@gregeganSF

Good points.

The issue isn't "software doing harm", it's "there being downstream consequences of software failing."

To see the fallacy in its simplest form, consider a program whose entire job is to open or close a switch. Would that be regulated as mission-critical or life-threatening? And yet if what the switch controls is a medication pump, a dangerous chemical, or the supply of a food or drug necessary to life, an error could be catastrophic. What matters can only be known in context; criticality is not a property of the device itself. So all kinds of devices that are hijackable, because no one imagined they needed to be hardened, can end up deployed in places that make us all vulnerable.
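To make the thought experiment concrete, here's a minimal sketch of such a program (pure Python, with a toy relay class standing in for real GPIO hardware; all names are hypothetical). Notice that nothing in the code itself signals whether the switch gates a desk lamp or a medication pump; the criticality lives entirely outside the program.

```python
class Switch:
    """Toy model of a relay driver; real code would talk to GPIO hardware."""

    def __init__(self) -> None:
        self.closed = False

    def set(self, closed: bool) -> None:
        # The program's entire job: open or close one switch.
        # Nothing here says what the circuit actually controls.
        self.closed = closed


if __name__ == "__main__":
    relay = Switch()
    relay.set(True)  # A lamp? A valve on a chemical line? A drug pump?
    print("switch closed" if relay.closed else "switch open")
```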

Even ignoring hacking, a cell phone that fails unexpectedly when you're hurt or lost in a forest can be a serious problem.