Linux as a platform is the equivalent of asking random volunteers to build a city, not telling them exactly how you want it built, and then hiring contractors to take what has been built and make it work for the majority of people. It's organized chaos.
I think the author is wrong that you need to enforce specific ways of doing everything, or that we even need to think about standards. The only thing you really need to enforce is a culture, which becomes an unofficial, de facto standard. I think Windows is a good example of this. Windows doesn't force you to do everything its way - it's just way, way more convenient to do so. Linux makes it too convenient to do something different, so everyone does.
I think the solution for a distribution is the following:
1. Remove all unnecessary software.
2. Remove all methods to easily install new software automatically
(reserve that function only for updating existing software).
3. Make all base software follow a set of guidelines that are incredibly
convenient for new applications to adopt and follow.
The result would be a culture of convenience that would enable more compatibility by simply making it easier for everyone to do the same thing. My apologies if the author proposed this very thing; it was too long and I didn't read it all.
I hate to be so callous about this but this article is really just a dev shitting on ops people because they either don't understand or are intentionally omitting the reason why sysadmins are put between devs and production.
> Linux distribution developer tells application and system developers that packaging is a solved problem, as long as everyone uses the same OS, distribution, tools, and becomes a distro packager.
There is no such thing as Linux. There is Red Hat Enterprise Linux, Ubuntu, openSUSE, Debian, Arch Linux, etc. These are systems with a loose set of tools in common, but they differ to the point of absurdity in organization, structure, available software, and capabilities.
Supporting multiple distributions should be viewed as supporting multiple platforms. You can either port your software or bundle all your dependencies, all the way down to libc if you want.
> making sure that the overall OS fits into a fairly specific view of how computer systems should work
Which is another name for 'software is incredibly complex and conventions are needed if you want to stay sane'.
Quick, where do web applications go? /srv/myapp, /opt/myapp, /usr/share/webapps/myapp, /usr/share/myapp, /var/www/myapp, /var/myapp/, /myapp, /usr/local/myapp? If you answered "all of the above", you've used 3rd-party software on a Linux system. Where do you think the logs for all those locations wind up?
Packaging is the art of beating software into submission so that it can actually be managed, secured, audited, and monitored.
> You're still supposed to go ask your sysadmin for an application to be available, and you’re still supposed to give your application to the sysadmin so that they can deploy it — with or without modifications.
This is PaaS, which isn't mandatory but is highly recommended when dealing with teams of developers. You write the code until it works; we deal with making that new library you want (the one released two weeks ago) actually work, handle the licensing of your APIs, libraries, drivers, and tooling, and integrate it into your CI/CD pipeline. You just write the code.
> to convince the developers of...
Because that's not how it works: we take your source distribution model (pip, bundle, npm, i.e. upstream) and turn it into packages. There's no convincing to do because the two aren't in conflict.
> The issue is not the 'managed by somebody' part; the issue is the inevitable intermediary between an application developer and an application user.
This is what it's really about. Devs don't understand what ops people actually do and just see them as a team that slows them down and tells them no a lot.
> developers are heavily based on the concept of a core set of OS services; a parallel installable blocks of system dependencies shipped and retired by the OS vendor
Otherwise known as the packages your distribution ships.
> How did foo get into the repository?
Because a distribution vendor thought your software was useful, packaged it for distribution, and put it in their repository. At this point it is NO LONGER your software or your responsibility. The vendor has adopted this package and committed to ensuring that it works and is maintained. People with problems should (and do) complain to them rather than to you.
> How can you host a repository, if you can’t, or don’t want to host it on somebody else’s infrastructure?
You make a repository and host it on your infrastructure. I promise it will be the easiest web service you've ever configured.
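To make that concrete: once the package metadata has been generated (a Packages index for apt, repodata/ for yum/dnf), a repository is just a directory of static files served over HTTP. A minimal sketch, with an illustrative directory name and port, could be as small as this:

```python
# Minimal sketch: a package repository is just static files over HTTP.
# Assumes ./repo already contains the packages and generated metadata
# (e.g. a Packages index for apt, or repodata/ for yum); the directory
# name and port are illustrative.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="repo")
HTTPServer(("", 8080), handler).serve_forever()
```

In practice you'd put nginx or any static file server in front of it, but the point stands: there's no application logic involved.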
> What happens when you have to deal with a bajillion, slightly conflicting, ever changing policies?
It depends. If you're distributing it yourself, then you treat it like cross-platform work. If distributions are packaging it for you, then you sit back and let them do it. If your IT dept is deploying your software, then you let them handle it.
> How do you keep your work up to date for everyone, and every combination?
You push an update (i.e. copy a file) to your repository, then people download and install it. Again, if you're opting for either distro maintainers or your IT dept, you just publish your code and let them deal with it.
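For a flat apt-style repo, "push an update" really is just copying the file and regenerating the index. A rough sketch (the filename and layout are made up, and a production repo would also re-sign its Release file):

```python
# Rough sketch of "push an update": drop the new package into the repo
# and rebuild the index. Filename and layout are illustrative;
# dpkg-scanpackages ships with dpkg-dev on Debian/Ubuntu.
import shutil
import subprocess

shutil.copy("myapp_1.2.3-1_amd64.deb", "repo/")

with open("repo/Packages", "wb") as index:
    subprocess.run(["dpkg-scanpackages", "--multiversion", "."],
                   cwd="repo", stdout=index, check=True)
```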
> What happens if you cannot give out the keys to your application to everyone, even if the application itself may be free software?
You distribute the software as one package (myapp) and the keys as a separate package (myapp-license or myapp-api-keys), and mark the keys package as a dependency of the main one.
> Scalability is the problem; too many intermediaries, too many gatekeepers.
There aren't any gatekeepers. There are people doing work for you. You are totally in control. You (or your org) can opt to have some of that work done for you. You can't pretend that it doesn't need to be done at all.
The days are coming soon when developers produce Docker images as their primary artefact and Ops are responsible just for k8s, DC/OS or whatever runs them, without knowing or needing to know what's inside those containers. At first developers will be ecstatic: they can use any version of anything and never have to get anyone else to install anything first. Brilliant!
As a developer this is exactly what I want. It is part of having ownership and is critical to writing and maintaining good software. Containers are great for this reason as they give me control of the complete runtime environment of my application so I can better support it.
And it makes those 3am calls less frequent, not more. I've always been on call for my software and I thought this was now the norm and the throw-it-over-the-wall process had mostly died out.
That's refreshing to hear. I've done both jobs, and I'll tell you what I've observed: if you were to go over to a developer and say "there's a bug in the program" they would look at you as if you were an idiot. What program? And when you do what?
But that same developer will go to ops and say "the server is slow", seemingly entirely unaware that the person they are talking to supports hundreds of apps running across thousands of servers. I think many developers are in for a bit of a surprise when they discover DevOps means "you do ops too now".
> There aren't any gatekeepers. There are people doing work for you. You are totally in control. You (or your org) can opt to have some of that work done for you. You can't pretend that it doesn't need to be done at all.
That's pretty much the heart of the matter right there. One person is complaining that the rest of the world isn't doing enough work to make the need for that work invisible to him.
> Quick, where do web applications go? /srv/myapp, /opt/myapp, /usr/share/webapps/myapp, /usr/share/myapp, /var/www/myapp, /var/myapp/, /myapp, /usr/local/myapp? If you answered "all of the above", you've used 3rd-party software on a Linux system. Where do you think the logs for all those locations wind up?
This is a large part of why so many people love containers. I don't care where my app is installed as long as it's inside the container image. And logs come to the container's stdout.
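As a sketch of what that looks like from the application side (nothing framework-specific, just the stdout convention):

```python
# Inside a container, just write logs to stdout/stderr; the runtime
# (docker logs, kubectl logs, or whatever collector ops has wired up)
# takes it from there. No log paths to agree on.
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logging.info("startup complete")
```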
> I promise [a package repository] will be the easiest web service you've ever configured.
I host my own package repository, and the tedious part is that I need to do everything on my notebook since I don't trust any server with the signing key. Not saying that it's particularly hard (it's definitely not), but it's tedious. I cannot automate the package-building part even for sources that I control because the packages need to be signed by me as the very last step of the process.
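Concretely, the manual tail end of my pipeline looks roughly like this (a sketch of my own setup: artifacts are built elsewhere and downloaded first, the key lives only on the notebook, and the hostname and filename are made up):

```python
# The only part that has to run on the notebook: detach-sign the built
# package with the local key, then upload package + signature to the
# repo host. Filename and hostname are illustrative.
import subprocess

artifact = "myapp-1.2.3-1-x86_64.pkg.tar.zst"  # fetched from the build box beforehand

# gpg prompts for the passphrase interactively; produces <artifact>.asc
subprocess.run(["gpg", "--detach-sign", "--armor", artifact], check=True)

subprocess.run(["scp", artifact, artifact + ".asc",
                "repo.example.org:/srv/repo/"], check=True)
```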
I don't have much to add to @Spivak's comprehensive response, except this one thing:
IF we agree backend services should be freely developed in multiple languages with multiple frameworks or "middleware" as the needs of the problem dictate, THEN we pay for that flexibility through the total effort required to deploy, secure, operate, enhance and maintain the overall system.
Engineering teams need to weigh the benefits vs the costs in that tradeoff. The good news is that the current wave of tooling is converging on a place where the so-called "system administrator" can express platform constraints in configuration or code that is visible to development, and likewise the developers can express the software requirements in configuration and code that is visible to ops. That transparency means (in principle, albeit imperfectly in practice) that conflicting needs surface more quickly and the team has access to the complete picture, which makes it possible for the full team to learn from that view.
In every startup I've run, the necessity to build and deliver quickly leads to a pragmatic choice of a subset of languages and frameworks. Narrowing assumptions allows the team to concentrate on optimizations (of staffing, of code, of process, of tooling) that tend to accelerate things overall.
Is "pragmatic choice" a euphemism for "too many layers of intermediary getting in the way of progress"? I don't believe so. Any artist will be familiar with the necessity of constraints, e.g. "The enemy of art is the absence of limitation" (Orson Welles), "Art lives from constraints and dies from freedom" (da Vinci), "Man built most nobly when limitations were at their greatest" (Frank Lloyd Wright).
I would submit - don't fight this tension as if it's an evil that needs to go away, but rather accept it as a necessary and even desirable limitation that you can use to your advantage.