The other side of it is this. By law, a licensed civil engineer must sign off on a civil engineering project. When doing so, the engineer takes personal legal liability. But the fact that the company needs an engineer to take responsibility means that if management tries to cut too many corners, the engineer can tell them to take a hike until they are willing to do it properly.
Both sides have to go together. You have to put authority and responsibility together. In the end, we won't get better software unless programmers are given both authority AND responsibility. Right now programmers are given neither. If one programmer says no, they are simply fired and replaced with another who will say yes. Management finds one-sided disclaimers of liability to be cheaper than security. And this is not likely to change any time soon.
Unfortunately the way that these things get changed is that politicians get involved. And let me tell you, whatever solution they come up with is going to be worse for everyone than what we have now. It won't be until several rounds of disaster that there's a chance of getting an actually workable solution.
Engineering uses repeatable processes that will ensure the final product works with a safety margin. There is no way to add a safety margin to code. Engineered solutions tend to have limited complexity or parts with limited complexity that can be evaluated on their own. No one can certify that a 1M+ line codebase is free from fatal flaws no matter what the test suite says.
There are currently decades of safety margin in basically all running code on every major OS and device, at every level of execution and operation. Sandboxing, user separation, kernel/userland separation, code signing (of kernels, kernel extensions/modules/drivers, regular applications), MMUs, CPU runlevels, firewalls/NAT, passwords, cryptography, stack/etc protections built into compilers, memory-safe languages, hardware-backed trusted execution, virtualization/containerization, hell even things like code review, version control, static analysis fall under this. And countless more, and more being developed and designed constantly.
The “safety margin” is simply more complex from a classic engineering perspective and still being figured out, and it will never be as simple as “just make the code 5% more safe.” It will take decades, if not longer, to reach a point where any given piece of software could be considered “very safe” the way you would consider any given bridge safe. But to say that “there is no way to add a safety margin to code” is oversimplifying the issue and akin to throwing your hands up in defeat. That’s not a productive attitude for improving the overall safety of this profession (although it is unfortunately very common, and that commonness is part of the reason we’re in the mess we’re in right now). As the sibling comment says, no one (reasonable) is asking for perfection here, yet. “Good enough” right now generally means not making the same mistakes that have already been made hundreds/thousands/millions of times over the last six decades, and working to improve the state of the art gradually.
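To make just one item on that list concrete: below is a minimal sketch (Python chosen purely for illustration, not something from either comment) of the margin a memory-safe language gives you. The out-of-bounds access that silently corrupts adjacent memory in unchecked C becomes a contained, recoverable error at the point of the mistake.

```python
# Minimal illustration: in a memory-safe language, an out-of-bounds access
# raises a catchable error instead of corrupting whatever happens to sit
# next to the buffer, the way an unchecked C-style write can.

def read_record(buffer: list[int], index: int) -> int:
    """Return the record at `index`, treating bad input as a recoverable error."""
    try:
        return buffer[index]
    except IndexError:
        # The runtime bounds check is the "safety margin": the failure is
        # detected where it happens and can be handled or logged.
        raise ValueError(f"index {index} is outside the buffer of size {len(buffer)}")

if __name__ == "__main__":
    data = [10, 20, 30]
    print(read_record(data, 1))    # 20
    try:
        read_record(data, 7)       # out of bounds: contained, not corrupted
    except ValueError as err:
        print(f"rejected bad access: {err}")
```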
Part of the evaluation has to be whether the disaster was one that should have been preventable. If you're compromised by an APT, no liability, much like a building is not supposed to stand up to dynamite. But if someone fat-fingered a configuration, you had no proper test environment as part of your deployment process, and hospitals and 911 systems went down because of it?
There is a legal term that should apply. That term is "criminal negligence". But that term can't apply for the simple reason that there is no generally accepted standard by which you could be considered negligent.
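For concreteness, the guardrail whose absence starts to look like negligence is usually not exotic. Here's a hypothetical sketch (Python for illustration; the function names, config fields, and thresholds are all invented, not taken from any real vendor or incident) of a deployment gate that validates a config and pushes it to a small canary slice before the whole fleet:

```python
# Hypothetical sketch, names and thresholds invented for illustration only:
# refuse to push a config fleet-wide unless it parses, passes sanity checks,
# and survives a small canary stage first.
import json

REQUIRED_FIELDS = {"version", "rules"}
CANARY_FRACTION = 0.01  # push to roughly 1% of hosts before everyone

def validate_config(raw: str) -> dict:
    """Reject malformed or fat-fingered configs before they leave the building."""
    config = json.loads(raw)                      # malformed JSON fails loudly here
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"config missing required fields: {sorted(missing)}")
    if not config["rules"]:
        raise ValueError("refusing to ship an empty rule set")
    return config

def push(config: dict, hosts: list[str]) -> None:
    """Placeholder for the real rollout mechanism."""
    print(f"pushing version {config['version']} to {len(hosts)} host(s)")

def hosts_healthy(hosts: list[str]) -> bool:
    """Placeholder health check; a real one would watch crash and error rates."""
    return True

def deploy(raw: str, hosts: list[str]) -> None:
    config = validate_config(raw)                 # gate 1: static validation
    canary = hosts[: max(1, int(len(hosts) * CANARY_FRACTION))]
    push(config, canary)                          # gate 2: small blast radius first
    if not hosts_healthy(canary):                 # gate 3: watch before going wide
        raise RuntimeError("canary hosts unhealthy; aborting full rollout")
    push(config, hosts)

if __name__ == "__main__":
    fleet = [f"host-{i}" for i in range(1000)]
    deploy('{"version": "2024.1", "rules": ["block-known-bad"]}', fleet)
```

Nothing in there is cutting-edge engineering, which is exactly the point: "should have been preventable" gets judged against baselines this mundane.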
Except nobody is asking for perfection here. Every time these disasters happen, people reflexively respond to any hint of oversight with stuff like this. And yet, the cockups are always hilariously bad. It's not "oh, we found a 34-step buffer overflow that happens once every century," it's "we pushed an untested update to eight million computers lol oops". If folks are afraid that we can't prevent THAT, then please tell me what software they've worked on so I can never use it ever.
An Airbus A380 comprises about 4 million parts yet can be certified and operated within a safety margin.
Not that I think lines of code are equivalent to airplane parts, but we have to quantify complexity somehow, and you decided to use lines of code in your comment, so I’m just continuing with that.
The reality is that we’re still just super early in the engineering discipline of software development. That shows up in poor abstractions (e.g., what is the correct way to measure software complexity?), and it shows up in the unwillingness of developers to submit themselves to standard abstractions and repeatable processes.
Everyone wants to write their own custom code at whatever level in the stack they think appropriate. This is equivalent to the days when every bridge or machine was hand-made with custom fasteners and locally sourced variable materials. Bridges and machines were less reliable back then too.
Every reliably engineered thing we can think of—bridges, airplanes, buildings, etc.—went through long periods of time when anyone could and would just slap one together in whatever innovative, fast, cheap way they wanted to try. Reliability was low, but so was accountability, and it was fast and fun. Software is largely still in that stage globally. I bet it won’t be like that forever though.