Hector Martin

Just had another argument about curl|sh, so I'm going to say this top level for future reference.

The way we use curl|sh is as secure, or more secure, than traditional distro distribution mechanisms (e.g. ISO images with hashes or PGP signatures) for 99.9% of users. If you think otherwise, you don't understand the threat models involved, and you're wrong.

If you are in the 0.1% that actually cross-references PGP keys against multiple sources, exchanges keys in person, and that kind of thing, then you could indeed actually benefit from a more secure distribution mechanism. You're also, unfortunately, not a significant enough fraction of our user base for us to spend time catering to your increased security demands, that we could instead be spending improving security for everyone (such as by working on SEP support for hardware-backed crypto operations, or figuring out how to actually offer FDE reasonably in our installer).

And if you're not manually verifying fingerprints with friends, but curl|sh still gives you the ick even though you have no solid arguments against it (you don't, trust me, none of you do, I've had this argument too many times already), that's a you problem.

Lona Theartlav

@marcan And yet the alternative - to download the script, glance at it without anything really sinking in, then run it - feels better despite being exactly as (in-)secure.

It is, indeed, a very human problem.

Rich Felker

@theartlav @marcan That alternative isn't much better, and it's also unacceptable. No users should be instructed to run privileged scripts from random sources that don't and can't understand the nuances of their system, and that make unstructured, undocumented, not automatically reversible changes to it. The security aspect is not just "someone may sub in a malicious script when the user agent is curl". It's "random changes to the system break security invariants".

Hector Martin

@dalias @theartlav

So you're opposed to running any OS installer? Because that's what all OS installers do.

We actually make *zero* changes to the running macOS other than an online resize of the partition, and all the actions are user-driven (the script doesn't just run off doing stuff, it's interactive). Plus the way platform security is designed on Apple Silicon, different OSes have no privileges over each other (assuming you enable FDE to provide the core isolation), and no machine-level global changes are made at all.

I don't see how you expect an OS installer to work in any other way, short of asking the user to do the installation as a completely manual process.

Rich Felker

@marcan @theartlav No, an OS installer is creating a known system, not modifying an existing one. (Well, short of multiboot, which I have Opinions™ about anyway.)

Hector Martin

@dalias @theartlav Then you're not in fact against our use case, so I rest my case :)

Rich Felker

@marcan @theartlav > not just "someone may sub in a malicious script...

That's still relevant, just not usually the biggest problem. And validating the curlbash antipattern by copying it, even in a context where it's less dangerous, seems bad too.

Rich Felker

@marcan @theartlav The flip side is it detracts from your credibility when you do it. Folks know "curlbash bad, projects recommending it are security clowncars" as a rule but don't understand the subtleties to evaluate "well in this instance it's not as bad".

Geoffrey Thomas

@dalias @marcan @theartlav But there is no such rule! Plenty of projects that are _not_ security clowncars recommend curl|bash for thoughtful reasons. Plenty of projects that are security clowncars ship source tarballs with unreproducible ./configure scripts.

There is a _perception_ that it's bad, yes. I think a respected project using curl|bash is just as likely to rehabilitate curl|bash and fix that perception, especially if (as here, as Sandstorm did, etc.) they write about why it's okay.

Geoffrey Thomas

@dalias @marcan @theartlav One argument in favor of curl|bash: all realistic alternatives - third-party rpm/deb/etc., pip install, building from source, etc. - are just as capable of running arbitrary code but they _look_ less dangerous. curl|bash is honest about its risk and makes people think whether they trust the source.

If a project can use a sandboxed app store or run on a web page, that's meaningfully better, but almost no project considering curl|bash can do that.

Rich Felker

@geofft @marcan @theartlav The core problem with curlbash is the *philosophy* - presume the user doesn't know how to admin their own system, install deps they need, etc. and ask them to let a script you wrote play admin on their box. (Along with that, it acts as license *not to document* what the user would need to do things themselves.)

Rich Felker

@geofft @marcan @theartlav curlbash should not be "rehabilitated". It's *always wrong*, just to varying degrees.

Your comparison of "unreproducible configure scripts" doesn't work because the scope of those is such that they run fine in a build sandbox where you discard everything but the build artifacts. curlbash on the other hand is full of commands to install packages, modify config files, etc.

Geoffrey Thomas

@dalias @marcan @theartlav Do any users who are not aware of the risks of curl|bash run ./configure in a build sandbox?

Also what build sandbox do you use? I would like to try to escape it. :)

FSMaxB

@marcan Also the way PGP signed repositories work is often vulnerable to downgrade attacks. Especially if you get your packages from a mirror.

Meriel :leafeon:

@marcan most people will happily download a package tarball from a random 3rd party mirror for their distro (which even today aren't always signed, especially w/ the proliferation of 3rd party repos on Arch, Debian, Ubuntu, etc) which contains a little shell script that in all likelihood runs as root, but wrinkle their nose at curl | sh. To me that just shows that most people build their opinions about the security of things almost entirely on perceived aesthetics. Debian still uses plain http URLs for its sources.list, with no automatic https redirect on the server side.

Simon Richter

@omni @marcan yes, because https is not part of the security model. It just breaks proxies.

Stephen Kitt

@marcan (Not disagreeing, at all.) This isn’t a problem in your case since the Asahi download script has a truncation guard, but the main issue with curl|sh in many projects is that a partially downloaded script can still be executed, sometimes with no indication that something is missing. To be clear, that’s *not* an authentication problem, it’s a user experience problem.
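
(For illustration, a minimal sketch of one common truncation guard: the script defines everything first and only calls into it on the very last line, so a partially downloaded copy does nothing. Hypothetical script and URL, not the actual Asahi bootstrap.)

#!/bin/sh
# Hypothetical example of a truncation-guarded install script.
set -eu

main() {
    echo "Fetching installer payload..."
    curl -fsSL -o /tmp/payload.tar.gz https://example.com/payload.tar.gz
    tar -xzf /tmp/payload.tar.gz -C /tmp
    /tmp/payload/run.sh
}

# Nothing above has executed yet; if the download was cut off anywhere
# earlier, main is never invoked (the shell either errors out on the
# unterminated function or reaches EOF without running anything).
main "$@"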

Dek 👨‍🚀🐧🚀 (

@marcan
I think the scariest thing is curl|bash becoming normalized, and then people running it from random websites for every single tool, all the time. Then I don't have the same guarantees that a project like Asahi has.
Maybe a project is malicious, or their website is compromised.

I'm mostly scared about malicious projects, personally.

~swapgs

@portaloffreedom @marcan I don’t see this as a counter-argument against curl|bash—if you’re pulling a malicious project or from a compromised backend, it’s already game over anyway? It’s no different from pulling a random software dependency from whatever registry your ecosystem offers.

Dek 👨‍🚀🐧🚀 (

@swapgs
@marcan
The only difference for me is that creating a website for a malicious project and paying Google to spam people to download it is much easier than getting a package into a repository.

But this discussion is super interesting, I didn't expect to get my base ideas on software distributions being challenged this deeply today.

Hector Martin

@portaloffreedom @swapgs There are very good reasons to distribute software via repositories, which is why the App Store exists. But sometimes the vendor-blessed repository isn't suitable (e.g. more traditional FOSS packages), and then what do you do? Install an alternate repository (Homebrew) or a whole new OS with its own package manager (Fedora Asahi). And in both of those cases, you use curl|sh to do it :)

Christian Lauf

@marcan Package management systems. Automatic updates. Patch management systems.
There is more to properly built packages than the way the software gets installed.

Curl|sh does none of that.

Hector Martin

@kharkerlake We have a package management system. curl|sh is how you bootstrap it by actually installing the OS. This is an OS, not a random software download. curl|sh is perfectly appropriate to bootstrap a whole new software ecosystem which, yes, has packages and automatic updates and all that.

Christian Lauf

@marcan Ok, that indeed rebuts my arguments. I was under the assumption we were talking about the classical "curl|sh and now you have some software on your system without anything else". I've just never seen it used to bootstrap an OS.

The thing is that 99.9% of all curl|sh scripts I bothered to look at were a horrible mess.
Yes, a prejudice. But sadly a rather true one. And I think that is why you are getting angry comments for it.

Akseli :quake_verified:​ :kde:

@marcan I'm mostly worried about a bug in the shell script that deletes stuff when running. No need to be malicious, even.

Hector Martin

@aks Bugs are a possibility with any and all software. This is completely tangential to the delivery mechanism.

Akseli :quake_verified:​ :kde:

@marcan my point is that something coming from a package manager likely has way less chance of deleting my home folder than a script piped to bash, due to more extensive testing and wider usage.

But in general I agree with your argument; when it comes to security nobody really cares, they just want to get things done.

Hector Martin

@aks I don't see how something coming via package manager means it gets wider testing. It might, or might not, mean it gets a few more eyeballs, if it was packaged by a third party.

But we are shipping an OS. We *are* the package manager. If you don't trust us not to screw up then it doesn't matter how the download works.

Akseli :quake_verified:​ :kde:

@marcan Not critiquing how you do it, nor saying what I think is "correct"; it was more a feeling thing. :) Perceived security vs actual security.

I always read the script anyway, but more out of curiosity than for verification. And I understand your use case for it!

mort

@marcan My $0.02: curl|sh is kinda not great when you're doing it with random people's software. If I don't know the developer I prefer using a distro package, or at least building from source (it's arguably easier to hide nefarious stuff in an install script than in a public git repo). But if I trust you guys enough to literally put you in charge of my ring 0, I trust you enough to curl|sh your script.

shironeko
@marcan Totally agree with you that curl|sh to install an OS works just as well as anything else. I think what's happening is simply classical conditioning; people are conditioned by the 99.99% of cases where curl|sh means sloppy packaging / proprietary crapware / accidental rm -rf / shell script footguns / etc., and the response becomes automatic and subconscious.
crepererum

@marcan I think for the OS bootstrap use case, curl|sh is totally fine. For installing apps though I think it's not only about trust but about scope: a script can do anything (including having a bug that deletes your entire disk), while most package installers will (more or less sanely) just place some files. Flatpak&co will even scope the file placement to a container. Now sure the installed app/tool can still have bugs, but I think limiting the scope of possible operations is always good.

Jo Shields

@crepererum @marcan hum? Both rpm and deb packages can run arbitrary scripts as root on install without extra user intervention. Installing a package is giving root on your system to the package uploader.

crepererum

@directhex @marcan yes, but these are extras and not used by the majority of packages. For most packages it's just placing files. And even the scripts that do run are more limited in scope than a shell script that tries to do everything.

Jo Shields

@crepererum @marcan are you verifying this statement, or merely assuming? I was a Debian Developer for 14 years. Just because a deb *can* contain nothing but files, doesn’t in any way prevent a maintainer from doing whatever they like in postinst
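
(For illustration, a minimal sketch of the point being made: a hypothetical DEBIAN/postinst maintainer script, which dpkg runs as root during installation with no prompt beyond the install itself.)

#!/bin/sh
# Hypothetical postinst for a package named "example"; dpkg runs this as root.
set -e

case "$1" in
    configure)
        # Nothing restricts a maintainer script to "placing files";
        # it can run any command the packager chose to put here.
        echo "postinst ran as: $(id -un)" >> /var/log/example-install.log
        ;;
esac

exit 0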

crepererum

@directhex @marcan DO the majority of the packages use postinst or COULD they use it?

Space

@marcan excuse my ignorance, but can you explain why the distro model is less secure? Are you hinting at control of mirrors not run by the distro itself, and mirrors not serving via https? Anything else I missed? (not trying to be a smart ass, just trying to get the full picture)

Hector Martin

@space Yes, the problem is distros that continue using the "random third party mirror" model to distribute their installers/ISOs. Even if they consistently use HTTPS, it's much easier to compromise a random mirror since they are run by many third parties. Of course when they do this they invariably offer at least a SHA hash of the iso or something on the primary website, but most people don't check that.

Random third party mirrors are fine for automated install systems that verify signatures without user interaction (e.g. package repositories), but I consider it an antipattern in this day and age for direct user distribution because the vast majority of users that don't verify anything will be exposed to the attack surface of countless third parties. For Asahi we use a commercial CDN, so there is a single point of compromise (and if you hack into Bunny.net or Amazon there are probably juicier targets than us).

We could further improve this by making the bootstrap script a trust root that verifies everything else directly, and serving it from an even more tightly controlled server, though we don't do that today (yet).

HAMMER SMASHED SIR 🇺🇦

@marcan I've made a few releases, including of a major component that's in essentially every respectable distro's default installation, where I accidentally signed the tarball with a key that nobody trusts, and basically all distros I am aware of just updated to the newer version. No big deal.

Hector Martin

@lkundrak I help maintain a bunch of Fedora packages and I don't even know how to set up sig verification for source files. Whatever I download on my workstation is what gets hashed and blessed as the real source.

But honestly, most projects these days don't even sign releases, they just host directly from GitHub tags or what have you. The assumption is that GitHub itself is secure enough and people know how to keep their repos secure (or if they get directly compromised, people will notice quickly). I do at least sign my tags on Git, but literally nobody checks that and I've had other contributors push release tags without a sig before and nobody cared.

Given the rather few stories of outright infra compromises leading to actual downstream compromise, and the most recent Jia Tan social engineering episode (which *gasp* even had tarball signatures, and in fact was *aided* by out-of-band tar releases not being directly sourced from GitHub!), I think we're doing okay on infra and we should be a lot more worried about social engineering and hidden backdoors than that.

(Oh yeah, and the part where the Intel employee responsible for maintaining a certain Linux driver deliberately introduced a security bug because he was lazy and admitted so in a comment and the commit message, and nobody cared, and I just found out 3 years later... yeah, we really have much bigger things to worry about than package signatures, seriously)

chebra

@marcan Funny how it keeps happening, over and over, isn't it? So many people discussing with you about curl|sh... all those insignificant people, with their wrong arguments...

Hector Martin

@chebra Funny enough, today is the first time anyone has made a non-invalid argument (that they're in the aforementioned 0.1%), hence this discussion and specific explanation of that case.

But no, I don't particularly care about how many people are wrong and use invalid arguments. There are lots of people who are wrong on the internet. Everyone I actually care about and trust agrees our usage of curl|sh is fine, as do the vast majority of our users who have no issue with it. That a small number of loud voices disagree doesn't make them right.

chebra

@marcan I'm glad I had the chance to talk to someone who is never wrong, thank you for that.

DELETED

@marcan There is no way to undo a curl|sh like you can with apt remove or apk remove.

Hector Martin

@metric_hen We're an OS. You don't install OSes from a package manager. We *are* the package manager.

(And we have a wiki page with uninstall/cleanup instructions, which many OSes don't even have)

curl|sh is an executable delivery mechanism. Your complaint is about people delivering random poorly-designed user-level software installers via curl|sh, not curl|sh itself.

DELETED

@marcan Then it's different. It wasn't clear from your profile that you are an OS. Asahi Linux, I guess?

DELETED

@marcan nvm, I just searched for Asahi Linux and now I get it. I thought this was about installing a random software package from the web, but that doesn't seem to be the case.

Raphael Lullis

@marcan

I don't see any "technical" issue with curl|sh, but do you really think that normalizing the act of downloading and executing random code from the internet is a good idea, without any social proof?

Sure, my mom wouldn't be doing curl|sh anyway but I'm thinking of a linux newbie who learns that "it is okay to curl|sh" and ends up installing all sorts of crap in their system.

Hector Martin

@raphael A Linux newbie who installs Asahi Linux via curl|sh from macOS hopefully understands that this is just for the OS and not how you then go on to install arbitrary Linux software. We have a package manager :p

Geoffrey Thomas

@raphael @marcan I actually think yes, it's good for legitimate projects to legitimize curl|sh precisely because it has an unfairly maligned reputation and this is the way to fix that.

CounterPillow

@marcan
$ curl --cert-status alx.sh
curl: (91) No OCSP response received

Don't worry though, I'm sure your Let's Encrypt cert is safe from hijack, just ask these guys: notes.valdikss.org.ru/jabber.r

Hector Martin

@CounterPillow That one's probably something to raise with Bunny.net, but honestly, being targeted by LEOs is outside my threat model.

Jonathan Isom

@marcan I personally don’t have a problem with your assessment.

I do think if the Asahi installer was an "app" download for macOS, fully graphical, it would be hands down the easiest and safest dual boot install option for just about any consumer linux platform. And Apple requires signing, so +1 on the security side. Not saying that stops all malicious actors, but better than the 0.1%

Hector Martin

@jeisom That's definitely an idea we've had for a long time, and yes, if it ever happens, we'd switch away from curl|sh and get it signed by Apple and all that, and it would indeed be a nonzero security improvement thanks to the signing - but the major reason to do this is the UX, not the "stop using curl|sh" part.

Jonathan Isom

@marcan oh definitely. I wouldn't trust windows not to mess up something with an update, but with the way Apple set up their AS machines, it just seems less likely to screw up linux installs (firmware issues aside). The UX would be miles (km) apart.

ZanaGB

@marcan the only reason I could see to oppose the curl|sh method is the potential that, without enough self-checking, a broken or malformed script could be run, resulting in damage during, say, the partitioning stage.

But how different would that be from an ISO that boots into a broken installer?

Hector Martin

@zanagb The script has a basic truncation guard and proper error handling, so there's no real way for unintentional truncation/corruption to go undetected. The bulk of the installer is downloaded as a separate step (and if the download or unpack fails the script aborts). The top level curl part is just a very simple bootstrap.

Of course you can have things like your network die mid-install, but there isn't much we can do about that (the installer streams the actual OS install data directly to the destination, this is a good thing since it avoids having to leave aside staging space and works from recovery mode where no staging space may be available at all). We do have retries and such for intermittent errors, and also reduce the block size when errors happen to help out with flaky connections.

If something does fail and abort, well, manually doing the cleanup/uninstall steps and trying again isn't a huge deal. The actual steps to make the install bootable happen at the end, so it's not possible to end up with an "accidentally booting but actually subtly incomplete/corrupted" install (I think).

Meowrio 🔒
@marcan It's interesting how people will bash any security decision they consider to be vulnerable, completely ignoring the idea of security being a cost-benefit calculation.
If your project does not solve problems in a high-security domain, chances are you don't need to worry about secret services doing DNS spoofing on your client's PC during installation (also even that example would be the client's (or client's network admin's) fault).
Security is an optimization problem and if you have the millions of dollars and minutes in budget to optimize it, that's awesome and literally optimal, but living in reality also means doing pragmatic solutions.
AlexM

@marcan Linux is an absolute mess in this regard. PGP is no good. Need a vendor to provide a distro with the Apple code signing verification model. Centralized dev cert distribution with CRLs, so there can be code signing verification on executables, packages, (and scripts). But it will never happen.

Eric Curtin

@marcan the one flow I can think of where it might break is if you lose your connection mid-delivery of the file, although I have not tested this. Could execute half a script.

But yes I agree, if you don't trust TLS, we might as well just deem https in general insecure.

Eric Curtin

@marcan the other benefit of a package (an rpm, say, from Fedora) is that not just anyone can become a Fedora packager; you need a sponsor who trusts you. This could potentially be obtained via social engineering, of course.

But of course in this case it's a macOS installer, and I don't know how auditing of brew packagers works 😊

But I generally agree some of the arguments around curl|sh are silly

mei

@ecurtin @marcan the general idiom I’ve seen is to have the script consist of only function definitions, followed by a call to main

Hector Martin

@ecurtin That's why we have a truncation guard (but even before we had it, I'm pretty sure for any arbitrary truncation point you could pick nothing bad would happen, given the simplicity of the bootstrap script).

Hector Martin

@christmastree social.treehouse.systems/@marc

TL;DR using a commercial CDN is more secure than piles of random third-party volunteer mirrors when you don't have an automated chain of trust system (like when you're just offering ISO downloads with a SHA hash that most users won't bother to verify).

curl|sh, by virtue of executing code, actually allows you/any distro to establish an automated chain of trust and not even have to trust the mirrors, though we don't do that yet ourselves, since our attack surface is already small thanks to using a CDN. For distros that insist on the random mirror approach, *switching* to a curl|sh script served from the home page that chooses a mirror, then downloads the file and verifies it, would indeed increase their security.
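
(For illustration, a rough sketch of the trust-root bootstrap described above: a small script served over HTTPS from the project's own site picks a mirror, downloads the real payload, and verifies it against a hash pinned in the script, so the mirror itself never has to be trusted. Hypothetical URLs and a placeholder hash; as noted, Asahi doesn't ship this today.)

#!/bin/sh
# Hypothetical mirror-verifying bootstrap, served only from the project's own HTTPS site.
set -eu

# Placeholder: the real published hash would be baked into this script,
# covered by the home page's TLS, so the mirror below is untrusted.
EXPECTED_SHA256="0000000000000000000000000000000000000000000000000000000000000000"
MIRROR="https://mirror.example.org/distro/installer.img"

curl -fL -o /tmp/installer.img "$MIRROR"
echo "${EXPECTED_SHA256}  /tmp/installer.img" | sha256sum -c - || {
    echo "Checksum mismatch; refusing to continue." >&2
    exit 1
}
# ...hand the verified image off to the next installation stage...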

DELETED

@marcan My personal problem with curl|sh as a one-liner is that the script gets executed immediately, leaving no chance to check it yourself first. But if someone's reaaally interested, they could just cut the sh part, check the script themselves, and THEN run it.

Also: keeping the software up to date. The best example would be rclone, which can also be installed via curl|sh. Although it downloads the latest version, it doesn't keep track of new updates. This might not be a thing with Asahi - I unfortunately don't use it - but that's the best example I could've given.

Samantha
@marcan just wait until they find out about how brew is installed on macOS
mppf

@marcan

The worst-case scenario isn't that your web server is hacked and somebody starts installing malware instead of your tool. The worst-case scenario is that this happens and the source of malware goes undetected for years, because the web server gives different scripts to different people.

Additionally, curl|sh seriously hampers incident responders figuring out the source of malware, because it doesn't save the script that was run anywhere.

1/2

mppf

@marcan

If you think about the open-source security model, it's arguably more based on auditing than anything else. Because everyone can see the same code, we like to think someone will notice a problem. curl|sh breaks that because people might get different code.

A reasonable alternative is to replace the paste-able curl|sh command with a paste-able command sequence which 1) downloads the install script and saves it to a file 2) verifies a checksum on that file 3) executes it

2/2
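
(For illustration, the paste-able sequence described above might look something like this; hypothetical URL, and the hash to compare against would be whatever the project publishes. On macOS, shasum -a 256 stands in for sha256sum.)

$ curl -fsSL -o install.sh https://example.com/install.sh
$ sha256sum install.sh    # compare against the hash published alongside this command
$ sh install.sh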

Hector Martin

@mppf *Any* download mechanism can serve different files to different people. Your idea just moves the attack point to the web server serving the instructions, such that they serve different checksums to different people, to match with the different files they're going to get.

You are just one more person arguing against curl|sh while completely missing that the problems you raise literally apply to every other competing approach. Seriously, stop and think please. I'm tired of having the same conversation over and over again.

The only way you can verify *anything* is against an existing root of trust, and the only root of trust that exists on the system is the WebPKI, which means you can trust a server is not spoofed, and yes, means you still have to trust the server itself, because there's no way around that (other than switching to Apple's walled garden codesigning, which has its own root of trust and yes, is in the cards some day, but that's a much more difficult project and it's never what the anti curl|sh people actually suggest).

Hector Martin

@mppf If our web server gets hacked it could serve different images to different people regardless of the verification/delivery mechanism. The attacker just has to change the SHA hash or PGP key or whatever else security theatre approach you use to "verify" the file with data coming from a web server anyway.

The script is just a bootstrap. The actual installer gets downloaded and unpacked to /tmp. Yes a malicious script could do something non persistent directly in the bootstrap. But a malicious download could also self-modify to erase the malicious part after it's done. This is a red herring.

That is: none of your arguments are arguments against curl|sh that don't also apply to everything else.

mppf

@marcan That's true but misses something important about security measures. They are about making it harder for somebody to do something bad, not to necessarily prevent it entirely. Having to change a SHA hash in a coordinated way is tricky for an attacker and significantly increases the difficulty.

Also, publishing the SHA hash etc. makes it much easier for people to compare notes, and especially makes figuring out what happened easier in the context of incident response.
