A problem with the lack of long term support is that you have to keep moving.
The most important property of "long term support" distributions is that breaking changes are kept to a minimum. If a "long term support" distribution releases an update to, for instance, glibc, you can expect that applying that update will change nothing that other parts of your software stack might depend on.
The dependencies can be subtle. For instance, a new version of a database server might have an optimized query planner that happens to make one particular query your software runs a couple of percent slower, which makes its processing take a few seconds longer, enough to push it over the timeout limit for a different part of the system. So the only sane way to avoid breaking changes is to avoid all changes.
The opposite approach is, as advocated in this post, frequent upgrades. "Frequent upgrades amortize the cost and ensure that regressions are caught early", but that means you are dealing with upgrades and regressions all the time. You arrive at work in the morning, planning to write a new feature, but an upgrade has just arrived, and it requires a few changes to your project. You develop, test, and deploy those changes; in the meantime, another upgrade has arrived, requiring more changes to something you adjusted just a few days ago. The day ends, and you never even started on the new feature. You spend more time chasing the upgrade stream than doing productive work.
Long term releases "batch" the changes. When several changes affect one part of your software, you only have to deal with them once. Sometimes you can even discard that part of your system and replace it with something else, whereas with a continuous change stream you would have wasted time adjusting it little by little.
A somewhat relevant post from Joel on Software: http://www.joelonsoftware.com/articles/fog0000000339.html