> I should strive to never upgrade anything unless I absolutely must.

It seems the choice is whether to live on the slightly-bleeding edge (as defined by “stable” releases and the like), or to live on the edge of end-of-life, always scrambling to rewrite things when a dependency is officially obsoleted. I advocate the former, while you seem to prefer the latter.

The problems with the former approach are obvious (and widely seen), but there are two problems with the latter approach, too. Firstly, you are always running very old software that does not use the latest techniques, or even reasonable ones. This can rise to the level of a bug: take MD5, which, while better than what preceded it, was treated by much software as a be-all-and-end-all hashing algorithm, and that turned out to be a mistake.

The other problem is more subtle (and was more common in older times): it’s too easy to be seduced into freezing your own dependencies, even once they are officially unsupported and end-of-lifed. The rationalizations are numerous: “It’s stable, well-tested software”, “We can backport fixes ourselves, since there won’t be many bugs.” But of course, in doing this, you condemn your own software to a slow death.
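To make the MD5 point concrete, here is a minimal Python sketch (my own illustration, not taken from any particular codebase) of the difference between hardcoding the algorithm everywhere and leaving yourself room to move off it later:

    import hashlib

    # Hardcoded everywhere: migrating later means touching every call site
    # and every stored digest at once.
    def fingerprint_legacy(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    # The algorithm is a parameter with a current default, so legacy digests
    # can be recognised and re-hashed gradually during a migration.
    def fingerprint(data: bytes, algorithm: str = "sha256") -> str:
        return hashlib.new(algorithm, data).hexdigest()

    print(fingerprint(b"example"))         # sha256 by default
    print(fingerprint(b"example", "md5"))  # explicit legacy use, easy to grep for

The second form at least makes the legacy algorithm something you can find and retire, rather than a silent assumption baked into every call site.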

One might think that the latter approach is the hard-nosed, pragmatic and responsible one, but I think this is confusing something painful with something useful. I think the former approach is more work and more integration pain, while the latter is almost no work, since saying “no” to upgrades is easy. It feels virtuous because working with an old system is painful, but I think one is fooling oneself into doing the easy thing while believing it is the hard thing.

The other reason one might prefer the former approach to the latter is that it speeds up software development in general through faster feedback cycles. It’s not a direct benefit; it’s more of an environmental effect that benefits the whole ecosystem. The latter approach instead slows down the feedback cycles of every affected software package.

Of course, having good test coverage also helps enormously with the former approach.




> but I think this is confusing something painful with something useful

There is software out there that absolutely cannot break, as in “if this breaks, people will die”. Medical software, power plant software, and air traffic control software are the obvious examples, but even trading and finance software falls into this category: if some hedge fund somewhere goes bankrupt because of a software bug, real people suffer.

It doesn't matter how boring, inefficient and outdated these systems are. It doesn't matter how much pain they are to maintain and integrate. These are systems you do not fuck with. Often the people who maintain them don't even fix known bugs, both to avoid introducing new ones and to avoid the rigorous compliance processes that have to be followed for every release. Updating the software just to bump some related library to the latest version is simply not a thing in this context.

I am not working on anything like this. I work on a lighting automation system. If I fuck up my release, nobody is going to die (well, most likely, there are some scenarios), but if I fuck up sufficiently, a lot of people will be incredibly annoyed. So I have every version of every dependency frozen. All the way down the dependency tree. I do check for updates quite often and I make some effort to keep some things current, but some upgrades are simply too invasive to allow.
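To make "frozen all the way down" concrete, here's a minimal sketch (Python, with illustrative package names and versions rather than my actual dependency list) of checking that what's installed matches the pins, transitive dependencies included:

    from importlib.metadata import version, PackageNotFoundError

    # Illustrative pins only, including a transitive dependency; not real project data.
    PINNED = {
        "requests": "2.31.0",
        "urllib3": "2.0.7",
    }

    def check_pins(pins):
        """Return mismatches between pinned and installed versions."""
        problems = []
        for name, wanted in pins.items():
            try:
                installed = version(name)
            except PackageNotFoundError:
                problems.append(f"{name}: not installed (pinned {wanted})")
                continue
            if installed != wanted:
                problems.append(f"{name}: installed {installed}, pinned {wanted}")
        return problems

    if __name__ == "__main__":
        for problem in check_pins(PINNED):
            print(problem)

Running something like this in CI is cheap, and it turns "we think everything is pinned" into something that actually fails loudly when it drifts.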


> I am not working on anything like this. I work on a lighting automation system. If I fuck up my release, nobody is going to die (well, most likely, there are some scenarios), but if I fuck up sufficiently, a lot of people will be incredibly annoyed. So I have every version of every dependency frozen. All the way down the dependency tree.

Most people’s systems are not so special that they absolutely need this, but it feeds one’s ego to imagine that they are. And, as I said, it feels more painful, but it’s actually easier to do this – i.e. be a hardass about new versions – than to do the legitimately hard job of integrating software and maintaining good test coverage. It feeds the ego and feels useful and hard, but it’s actually easy; no wonder it’s so very, very easy to fall into this trap. And once you’ve fallen in by lagging behind this way, it’s even harder to climb out, since that would mean upgrading everything even faster to catch up. If you’re mostly up to date, you can afford to let a single dependency lag behind for a while to avoid some specific problem. But if you’re running all old, unsupported stuff and a critical security bug turns up with no patch for your version because the design was inherently flawed, you’re utterly hosed. You have no safety margin.


This is the danger of living on the edge of EoL, as you called it. When you are finally forced to update a package, it's usually on short notice, at the most inconvenient time, because of some zero-day vulnerability found in one of your dependencies. Then the new version no longer supports the old version of a subdependency, which you also pinned, so you have to update that as well. And to upgrade that package you have to update yet another subdependency, and so on.

Suddenly a small security patch forces you to essentially replace your whole stack. If your tests flag an error, you have no idea which of the updated subpackages caused it, because you have replaced all of them. Eventually you accumulate so much tech debt that it's tempting to cherry-pick the security patches into your pinned packages instead of updating them to mainline, sucking you even deeper into the tech-debt trap.

Integrating often means each integration is smaller, less risky, and easier to diagnose when it fails. Of course this assumes you have good automated test coverage, which I assume you do if the systems are as life-critical as the parent claims.

There's also a big difference between embedded and connected systems here. Embedded software usually gets flashed once and then just does whatever it is supposed to do. Such software really is "done": there is no need to maintain it or its dependencies, because it's not connected to the internet, so zero-days and other vulnerabilities are not really a concern.



