Just what we need. Another package manager. (standalone-sysadmin.com)
45 points by MPSimmons on March 18, 2014 | 65 comments



I don't know about anyone else, but I treat my system and language specific package managers completely differently.

My system package manager has root access; it can make or break my machine. As a user, I am trusting the distribution I use and its package maintainers. Package vetting, stability testing and signing are what I expect from my distribution.

For language-specific package managers, those things would be nice, but completely unreasonable to expect. There is no trust involved; how can there be? Most package repositories have no vetting process and are publicly writable.

For Python, there is virtualenv. Packages are "installed" in their little environments with user privileges. For node, I personally have a directory in my home for modules and then ln -s the CLI tools into ~/bin. Again, all with user privileges.
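For the curious, that setup looks roughly like this (paths and package names are just examples):

    # Python: per-project environments, nothing touches the system
    virtualenv ~/venvs/myproject
    . ~/venvs/myproject/bin/activate
    pip install requests

    # Node: keep modules in a dir under $HOME and symlink the CLI tools you want
    mkdir -p ~/node-modules ~/bin
    cd ~/node-modules && npm install grunt-cli
    ln -s ~/node-modules/node_modules/.bin/grunt ~/bin/grunt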

The crazy thing is, for some people there is no distinction. In fact, I've noticed a trend in the node community: they all give instructions to install their modules globally. Literally every set of installation instructions I have seen for node CLI tools says the same thing: install globally.

This is pretty baffling. If you were on a Windows machine, would you download some random setup file from a public ftp and run it as administrator? I don't know why an entire community (of power users and developers, no less) seems to think it's somehow acceptable practice.


>This is pretty baffling. If you were on a Windows machine, would you download some random setup file from a public ftp and run it as administrator?

Yes, that's the general practice under Windows. As a clueless end-user you also often get the original software wrapped in "experience enhancing" adware installers - provided you actually find the correct download link, since the download pages of the various sites are littered with ads containing fake "download now" buttons that install various PC "cleaning" utilities (themselves wrapped in adware installers).


He was talking about developers so I guess a good comparison would be versus NuGet.


If you're a C or C++ dev your system and language package managers are usually the same thing.

And even in your case I would argue you don't need several different package managers, merely several different environments.

It's just a matter of having different databases/install paths depending on what you're trying to do, you don't need a whole new packager.

That would fit within the unix philosophy of having "one program that does one thing and does it well" instead of having a hundred package managers, each with its share of bugs and quirks and unique features.


> If you're a C or C++ dev your system and language package managers are usually the same thing

And this is a huge mistake that needs to be rectified. I can't tell you how many times I got fucked by "well, program X uses libcurl 3.4.4 but program Y uses libcurl 3.4.2, and guess what, Y doesn't link to a specific version but it breaks with 3.4.4, yet insists on building with that one." So I have to go in and change build scripts manually (or rename files or something) to get it to work. Not having a way to isolate dependencies when needed and specify them in a fine-grained manner is a huge problem when dealing with any non-trivial codebase. Virtualenv / pip / requirements.txt is a ridiculous lifesaver in this sense. You can just push to any machine and it'll work regardless of the state of already-installed packages on that machine. That's what this is all about - not having to care about state.
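As a sketch of why that's a lifesaver (the pinned versions here are just examples):

    virtualenv venv && . venv/bin/activate
    pip install -r requirements.txt   # e.g. requests==2.2.1, Flask==0.10.1, pinned exactly
    pip freeze > requirements.txt     # capture the environment's exact versions for the next machine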

I do agree with the general idea that things would be better if there were a package manager standard that everyone adhered to.


Is this actually true? It seems to me that wget+configure+make is the counterpart to gem/pip/npm/cargo for actively developed dependencies, while everybody uses apt-get/port/brew for system packages.


I believe what they are trying to say is that, if the correct libraries aren't installed, there will be a failure at the configure step, whereupon you spend some time with the system package manager getting the required libraries.

This is all well and good assuming your system repository has the library you need (let alone the exact version you need). However, when developing new software you generally need to link against something newer than what the system package manager can reasonably provide, and since most system-level package managers were created before the days of GitHub, you are stuck in a recursive loop of downloading and configuring.

Whenever I get 3 deep into a make loop like this I start pining for a package manager to sort it all out for me.


Yeah, that was pretty much my point. Instead of using gem/pip/npm to download, build, and install things in an accessible place, you use wget to download things, configure and make to build them, and make install to install them.
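In other words, something like this (the tarball URL is made up), repeated for every dependency of a dependency:

    wget http://example.org/libfoo-1.2.tar.gz
    tar xzf libfoo-1.2.tar.gz && cd libfoo-1.2
    ./configure && make
    sudo make install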


This is actually possible to avoid! It's just a crapload of work and your tools will fight you every step of the way.


>Literally every single installation instructions I have seen for node cli tools have said the same thing, install globally.

You must not use node very much, then. Generally, IF there are instructions, which there often aren't because it's so obvious, it's because the package installs a bin you would want globally available. Most instructions don't exist, or they tell you what to put in your package file.


You're right, I'm relatively new to node. This is exactly why I have been reading the manuals of various packages. Apparently, I'm the only one here who does.

You are just flat out wrong to say that projects don't say to install globally. Just after a super fast search of some well known packages:

Express: npm install -g express-generator@3 (https://www.npmjs.org/package/express)

Mocha: npm install -g mocha (http://visionmedia.github.io/mocha/#installation)

node-jslint: npm install jslint -g (https://github.com/reid/node-jslint)

Yeoman: npm install -g yo (http://yeoman.io/)

Bower: npm install -g bower (http://bower.io/)

JSHint: npm install jshint -g (http://www.jshint.com/docs/)

Grunt: npm install -g grunt-cli (http://gruntjs.com/getting-started)


I kind of agree, though those are all development tools which shouldn't influence packaged apps.


>it's because the package installs a bin you would want globally available

Everything in that list.

You conveniently ignore that the express guide tells you to add express to its package file; express-generator is a separate utility. No one installs libraries globally.


I'm not "conveniently ignoring" it; it's just that the distinction is irrelevant. You are running npm as root and installing packages.

Even if you completely ignore the security issues (which were my entire argument to begin with, and which you ignored), "globally available" in this context really just means "put in your PATH", which can easily be accomplished without giving a bunch of random javascript tools root access to the machine.
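A sketch of the no-root alternative, assuming you're happy to put npm's prefix under $HOME:

    npm config set prefix ~/.npm-global   # "global" installs now land under $HOME
    export PATH=~/.npm-global/bin:$PATH   # add to your shell profile
    npm install -g grunt-cli              # no sudo, no root-owned files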


There sure are a lot of results for "npm install -g" on GitHub, and some of them are the recommended way to install a module.


It's a terrible practice; don't do that.

That's why every node project has a file called 'package.json'; it's so you can run 'npm install' and install into your local directory with local privileges.

...I'm really not sure who you've been talking to.

(To be clear this is specifically the way npm is designed to work, and it's very good at it; using npm as a global package manager is flat out stupid; maybe you're thinking of gem...)
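For anyone unfamiliar, the local workflow is roughly:

    npm install --save express   # records the dependency in package.json
    npm install                  # on any other checkout: installs everything into ./node_modules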


sigh "Nobody" uses gem that way either, for the same magnitude of "nobody". Why do people feel the need to be willfully ignorant of or dishonest about the tools that they haven't personally chosen to use? It's really stupid.

Edit: I will say that npm is much better than gem at the common project-based usage pattern, and it's even a little nicer than gem+bundler, in my opinion. But regardless, installing gems to the system has been uncommon for quite a few years.


Sadly, and ironically, unless you use npm and grunt, in which case the grunt plugin will usually require a global install of (for example) compass, sass, premailer, etc.


I haven't "been talking" to anyone. I've been reading the quick start or installation instructions for major node projects and lots state to install globally. See my other comment.


It was inevitable that there would be a package manager for Rust. Packaging, versioning and distributing programming libraries is still an unsolved problem.

The requirements for an OS package manager are very different from those for one used to install libraries into your development environment. Things move relatively slowly and there's less need to get bleeding-edge versions included.

All of these programming language specific tools have their specific needs when it comes to version and dependency management. In contrast with OS package management, there is frequently a need to have several versions of a particular library installed. Libraries are usually installed in a per-user or per-project sandbox rather than system wide.

As much as I wish there was one package manager that could serve all these needs, I don't see that happening in the short term. The situation where we have half a dozen popular OS package managers and one (or more) package management systems for each programming language is less than ideal, but trying to unify all of those would be quite an effort. It would require getting the right people around the same table, and the end result would be a compromise of some kind.

I hope this happens but I don't know who would put the time and the effort to do it and what it would take for it to gain traction.

Question to the OP: which package manager would you have picked for Rust? You point out a lot of problems in the post but don't come up with obvious solutions.


Op here.

I'm not sure what the right solution is.

Each of the major package management solutions has provided ways to talk to independent repositories. I think that, in some ways, it makes sense for a language to maintain repositories for each of the major OSes. This isn't without problems, of course, because then instead of writing software, you're spending time packaging and testing, when you could just make a Ruby gem and be done with it. Which is what happens now.

Honestly, even a way to conclusively enumerate the installed packages, versions, and sources of each package would be an improvement. That way, I could at least be reasonably assured of recreating the environment.


    I think that, in some ways, it makes sense for a language to maintain
    repositories for each of the major OSes.
Specifically with regard to Rust, what exactly does that mean?

Are you suggesting that if someone writes a rust library they should publish an .rpm, a .deb, an insert-here for osx, and an installable .exe (or nuget package) from a centralized repository? Or push them to the upstream providers for each distribution?

Or something else entirely?


I can't agree that it makes sense for a language to maintain repositories "for each of the major OSes". In general, those modules are OS-independent; that's the whole point - no matter if I'm on Ubuntu or Windows or some-obscure-definitely-not-major OS, I should have the same repository with the same download, and I can distribute a package that will be available and work on MacOS without needing a Mac.

This also means that OS-specific package managers aren't usable - the same package manager should work the same way regardless of which OS is used.

If I have a Dropbox folder with my project and its package requirements, then for most modern languages the exact same folder simply works on my Mac, Windows and Linux computers.

"conclusively enumerate the installed packages, versions, and sources of each package" - most package managers do so. Many allow a project to define not only the package needed, but also the package version; and they are able to automagically download and use package version 1.2.3 for project A, while keeping the 2.0.0.alpha version of the same package available for project B.


> Honestly, even a way to conclusively enumerate the installed packages, versions, and sources of each package would be an improvement

I feel like there are usually ways to do it, you just have to really go out of your way and it's not normally the default or even recommended way of doing things (which is a problem, to be sure).

Using the Puppet example, the Forge is definitely terrible: the published "versions" are often not tagged, or tagged differently on GitHub, modules rarely have been updated with bug fixes from the past two years, etc. But most of the sources are on GitHub, so you can fork them, tag them with semantic versions, and use something like librarian-puppet or r10k to enumerate those versions and deploy your environment.

Of course then you're kind of using your own custom system which can have its own problems, but you can be reasonably sure that for a particular project you could re-deploy exactly the same modules as long as you have that Puppetfile.
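A sketch of what that looks like, assuming forked modules tagged in your own GitHub org (the names are made up):

    # Puppetfile (librarian-puppet / r10k format), pinning a forked module to a tag
    mod "stdlib",
      :git => "https://github.com/myorg/puppetlabs-stdlib.git",
      :ref => "v4.1.0"

    # then:
    librarian-puppet install    # or: r10k puppetfile install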


As I've argued before, language-specific package managers are evil. Unfortunately, so are system package managers that are too cumbersome or can't/don't keep up with what's going on in the faster-moving language communities. This results in projects being packaged by people who don't actually understand them, while the people who do understand them don't (and shouldn't need to) understand the minutiae of how to package things for a particular platform. This in turn causes far too frequent breakage.

What we really need is system package managers that can cooperate with their language-specific brethren to get info about packages under other managers' control, direct other managers to install something according to its own rules/methods, and so on.

"Hey apt-get, please tell me the status of this Ruby package" <<apt-get turns around and gets the info from gem>>

"Hey yum, please install this Go package" <<yum turns around and tells go to do it>>

The rules for how to talk to each language-specific package manager shouldn't even need to be very complicated. The real work would be getting all of them to use a common format for talking about versions, file lists, dependencies, etc. It would be worth it, though, to have those dependencies tracked properly across all languages/formats instead of being lost at each boundary.


Ha, I knew this would already be here: https://news.ycombinator.com/item?id=7421759

I do like that your idea isn't "build something perfect" but rather "teach the imperfect things how to talk to each other". Could be very neat. Not sure what it would look like.


> Please, accept that a tool someone else wrote that you see as imperfect may actually solve the problem, and know that the world may not need another solution that does the same thing.

For Rust specifically, can you suggest any?

(I can't think of any, but that doesn't mean there aren't any.)


rpm, dpkg, a Windows MSI file, a .app bundle on Mac. These will all work with any language that you want; plus, you get the added benefit of standardized placement, and the USERS of each platform know what to expect when they install whatever you've decided to throw on their machine.


My favourite solution to this would be an APT extension which allows installation of binaries into $HOME by unprivileged users and for all these language-specific things to be turned into simple APT repositories.

I can still have dreams, right?


That is what I prefer. I won't allow another package manager to run under sudo. Either it must be installed into my user home or be installed system-wide via my Linux box's package manager.


You can probably accomplish that today with fakeroot.


At the dpkg level, this mostly works, yes. I can extract to e.g. ~/local, set $PATH, $LD_LIBRARY_PATH etc. and stuff works.

However, I'd also like to have some dependency resolution (ideally including already system-installed packages where possible; no need to install libc6 twice) and some more magic. For example, I have to manually adapt the file /etc/bash_completion if I want it to work out of ~/local; similarly, many programs only look at /etc/foo and not ~/local/etc/foo.

fakeroot doesn't really help and chroot again requires root privileges.

As I said, dreams :)
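For reference, the part that does work without root looks something like this (the package name is made up):

    dpkg -x somepackage.deb ~/local            # unpack a .deb without root
    export PATH=~/local/usr/bin:$PATH
    export LD_LIBRARY_PATH=~/local/usr/lib:$LD_LIBRARY_PATH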


The path to a standard package manager starts with a standardized protocol for package management.

A service protocol that can serve a repository of packages over HTTP and FTP. A client protocol that can keep track of installed packages and can index, search, and look for updates on installed packages.

Split package management into layers and only try to standardize bit by bit. People will never agree on deb vs rpm. People will never agree on using json vs python vs Makefile vs ruby vs shell vs whatever else - they'll always want their most familiar language for their package manager, which in domain-specific packaging means the domain-specific language.

So don't try to standardize those. Standardize the rest. Give us the protocol that can power all of this and increase interoperability. Separate the repository layer, the package format (deb, rpm), the packagefile format (setup.py, Makefile, PKGBUILD) and the package manager (interface: yum, apt-get, aptitude, pip, npm) from the rest of the protocol.

Make this potentially usable for things such as browser extension repositories, android package management, vim bundles and what not.

Someone please work on this. I'd do it but it just occurred to me I have to clean my oven.


Yes, I think this is the right approach. Most package managers expose the same functionality under different command names. I don't mind all the different applications so much as the lack of a standard that they are built to.


My knee-jerk inclination to this post is to yell, "oh holy hell, yes!"

That said, and as others in this thread have noted, there are actually two use cases that need to be satisfied.

1. Here, you've got a base system, and you want to install some piece of software in order to use it. You want this to be guaranteed, for some reasonable definition of "guaranteed," to work with your existing base system.

2. Here, you want to install packages within a segregated environment, and you want those packages to work with any packages previously installed in said environment. You're probably attempting to do something like recreating your deployment environment locally.

It strikes me that there are only two issues preventing the latter from being subsumed by the former.

1. Not all package management systems provide a means to have multiple versions of a package/runtime/what-have-you installed at the same time. Often, this capability is there, but packages need to be specially crafted (unique names, etc.) for it to work. See Debian's various Ruby and Python runtime packages, for example.

2. Not all package managers provide a way to install a set of specific package versions in a contained environment which is segregated and requires intention to enter.

(Note that I'm ignoring the "there are different package formats" issue; I don't think it is in practice a huge barrier, and the package maintainers should be involved anyway.)

If we could get RPM and YUM to provide those services, then we could remove the vast majority of this duplication.

Alternatively, if we all agreed that developers should just use Linux containers as development environments, then all we'd need is for upstream to use native OS packages (which is, really folks, not very hard).

Can we do that pretty please??


So how do I rebuild the compost heap infrastructure that I used to build my environment?

This.

Has anyone ever tried FPM (https://github.com/jordansissel/fpm) yet?


Yes, I use FPM quite a lot to make debs for any ad-hoc installations on Ubuntu. It's a lot easier than the actual Debian packaging tools!

E.g.: https://github.com/threedaymonk/packages/blob/master/go.sh
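For anyone who hasn't seen it, an ad-hoc deb with fpm is roughly a one-liner (names and paths are illustrative):

    fpm -s dir -t deb -n mytool -v 1.0.0 --prefix /usr/local -C ./build .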


Yes, and I thank $deity that he made it. I don't have to use it often, but when I do, I'm glad he made it.


Ha! I tried to do something like this once as well, but lost steam: https://github.com/benburton/huff


> Suppose I used entirely off-the-shelf puppet code. Nothing custom, just modules I found. And I erased my repo which contains my puppet modules. How would I rebuild it and get the same thing that I had before?

Well, there's Blueprint (http://devstructure.com/blueprint/) which purports to reverse engineer servers and spit out Chef / Puppet modules.

But... I'm not sure I understand the question. It seems akin to asking "I deleted all of my source code, how do I rebuild what I had before?" That's why we have version control. That's why we have backups.

I also don't understand this rant in the context of Rust and its Cargo package manager. There are several distinct domains involved, and it seems pretty reasonable for each to have its own management tool.

Puppet, Chef, Ansible, or Salt for handling machine configuration. Yum or APT for handling system-level packages and services. Pip, Gem, NPM, or Cargo for application-level dependencies. Seems pretty reasonable to me.

If you need it to instantiate brand new machines, you can get into VMs (VirtualBox / VMware) or containers (Docker), each of which can also be trivially scripted (Vagrantfiles / Dockerfiles).

The whole array of tooling seems more complementary than competitive.


> Yum or APT for handling system-level packages and services. Pip, Gem, NPM, or Cargo for application-level dependencies.

That's the thing: what is the distinction between "system-level" and "application-level"? Has it really gotten to the point where the only thing we can use /usr/bin/python for is to run other things in /usr/bin? This may very well be the case, but it strikes me as slightly strange given that we never used to be afraid of linking against /usr/lib or running our scripts under /bin/bash.

What happened in the past 10-15 years that changed the world so much that whereas before we ran our applications on top of the system, now we seem to want to run them in individual sandboxes, often inside of other sandboxes inside of other sandboxes? Was it really so bad to yum install and gcc -lwhatever without having special paths everywhere for everything?


> what is the distinction between "system-level" and "application-level"?

I think it mostly comes back to scope or encapsulation. I expect my host to provide facilities external to my program (databases, sendmail, a webserver), and I expect to have control over the software libraries internal to my program.

Perhaps you're also asking why APT or Yum, which are great at managing system-level package availability and versioning, couldn't be adapted for local use by applications. I'm not sure there's a good answer for that. Maybe it's just portability? If BlubLang runs on 9 platforms, then BlubPM needs to run on those same 9 platforms. It's probably easier to get there if BlubPM is written in BlubLang.

> What happened in the past 10-15 years that changed the world so much that whereas before we ran our applications on top of the system, now we seem to want to run them in individual sandboxes, often inside of other sandboxes inside of other sandboxes?

VMs? I once heard a terrible analogy to the effect that we used to treat servers like pets: we gave them cute names and we nursed them back to health when they got sick. Now we treat them like livestock: if one gets sick, you shoot it and get a new one. There's a hilarious send-up of hand-maintained systems in the "DevOps is Ruining My Craft" article at http://tatiyants.com/devops-is-ruining-my-craft/

The more you isolate the applications from the host, the easier it is to redeploy them.


I have run into this "system-level" versus "application-level" problem before. Maybe my OS's package manager installed foo-fred.tgz, but then foo has its own manager and wants to install a different version of fred, and I have absolutely no idea who maintains it; it just magically pulls stuff off the webtubes from God-knows-where. And how do I verify the integrity of those packages?

It's a nightmare. I just plug my nose and dive in and hope that whenever things go really bad it won't be on a day when I'm busy.


Have you ever worked with other people who use different OSes, or need different versions of libraries installed?


I agree, it does harm.

In principle if everyone used their distro packages for things like...say, Wordpress, we wouldn't have as many vulnerable installations on the web (see: NHS). How many people actually use the wordpress package from their distro rather than just uploading a private copy to their webdoc root?

Instead blog admins have to log in to their control panel and perform a (hopefully working) auto update there, and then have to shell in to upgrade other important things like PHP.


Have you ever seen how Debian et al package web software? According to the LSB/FHS - and that's not only a lot of additional work that requires testing, it also goes against how e.g. WordPress handles its own update mechanism.


Looks pretty reasonable to me; they even provide a setup program of some kind and what are presumably default htaccess rules.

https://packages.debian.org/sid/all/wordpress/filelist

for comparison:

https://www.archlinux.org/packages/community/any/wordpress/

What's wrong with it?

The fault is that of PHP developers for not seeing their distributables as system software.


Fine, don't use it. But you probably will, because after fighting it you will likely find it makes your life easier.


Package management is a fractal problem; look at it from a high level and it all looks simple and they all look similar... zoom in and the similarities start falling away.

It's probably theoretically possible to build a meta-package-manager that really could make everybody happy, but it's difficult to imagine what project structure could get us there, and it's also difficult to imagine how to incrementally develop such a thing in a way that it is immediately useful to everybody. Without that you've got a barrier to deal with.

If you view an individual language package manager as essentially creating a container for the code to run in, a combination of Docker plus the Nix package manager is probably getting pretty close to what everybody needs, but you'd still have a long row to hoe getting everybody even remotely on board.


Captain metaphor here ... Isn't what you describe (a phenomenon which looks distinctly different at different levels of zoom) the opposite of a fractal?

Fractals are typically described as being self-similar, i.e. they look the same regardless of the zoom level.

Most things don't, which would seem to mean that package management is like many other things, more than it is like fractals. Many things that are different look alike when viewed from far away, since you don't see the differentiating detail.


Two variants of the Mandelbrot with slightly different settings will look very similar at the highest level, but have completely different zoom characteristics.

I have to admit that in hindsight I used a fractal metaphor that assumes entirely too much time spent fiddling around in Fractint.


Yup. Package management is basically a container manager containing files and hook scripts.

Docker? Container manager.

Virtualization? Container managers.


With regard to Rust specifically, you will always have the option of working like you do with C++: grab some binaries, and either stuff them into a location on your search path or just pass the "-L" flag to the compiler telling it where to look when linking. Cargo is not an attempt to create another walled garden; it's just an optional tool to automate dependency resolution and versioning of external libraries.
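A minimal sketch of that manual route (file names are illustrative):

    rustc --crate-type=lib mylib.rs   # produces libmylib.rlib in the current directory
    rustc -L . main.rs                # -L adds a directory to the library search path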

That said, I agree that it's a huge pain that so many groups feel that the current tools are inadequate enough that they have to design and implement these sorts of things from scratch. I haven't looked much at 0install (http://0install.net/), but let's hope that something of its ilk saves us from this mess some day.



[deleted]


Hi, it'd be really awesome if you replied to post's actual argument, rather than attacking the author. The HN guidelines have some good suggestions for constructive discourse: http://ycombinator.com/newsguidelines.html


A huge part of the problem is that many of the language-level packages like .gem are incompatible with system packages like .deb. Some of this is due to the package managers and some of it is cultural. Rust is young enough that the culture is not frozen. Establish the culture that breaking API changes without increments to the major number is a showstopper bug, and that will help. Compare that with the Ruby culture, where Rails 2.3 introduced a huge number of breaking changes vs 2.2. Heck, there were breaking API changes in several of the 2.3.X releases. No wonder Bundler was created to lock versions down.


I wonder if it would be possible to build a meta-package-manager that works with all, or at least a lot of, the existing ones. The OP is totally correct that having lots and lots of different package managers is insane. One major thing currently lacking is management of cross-package-manager dependencies.

I don't believe this problem can be solved well at a centralized point like a distro - there are too many different versions of too many libraries involved, so any solution must be decentralized. Nested support for namespaces would probably also be necessary to scale well.


Of course it's possible. I bet it even exists already in some dusty, little-used repository and someone will post it to HN in the next day or two.


Or two will be posted, with conflicting requirements, and each will get its own little band of followers who tell people asking on StackOverflow why their plugin isn't working to "just trust MetaSlackOpkg and it will be okay."


IMHO the problem is that there is no standard package manager. Therefore everybody keeps building custom solutions and fragmenting the ecosystem a little more.

If there was a standard package manager that wasn't tied to a particular OS/distribution then we could all just happily target it instead.

Of course the task of making a package manager that would work on all un*x flavours as well as Windows and probably a couple others and managing to get it accepted by the majority of users/distributions sounds like an impossible task to achieve.


Having been in a similar place myself, the solution is to host your own repos for packages and deployment config using Git. Never rely on the remote internet to be as consistent as internally-hosted code. Of course it'd be wonderful if you could do without, but somewhere you'll have to specify and track version numbers in a text file and Git's as good a way as any to track and tag that.


Let's just all agree that Nix is the way forward and start migrating every Linux distro, language repository and app to Nix expressions.

sigh I can dream.


Just linux distros? You think too small. :-)


I'd like to see something like Arch, Parabola or similar with packages that are source-only, signed, have minimal dependencies, and just work. Oh, and push fixes upstream.

(Glibc makes me sad.)


This blog post expresses little understanding of why these other package managers are used, or even how.

They are not supposed to be used to install things under (say) /usr at all. That is up to the platform's package manager. If you do that, it's not the tool but you who is being stupid. If this is a real problem for you, why are you putting the person who does this into sudoers on production machines? I assume, or anyway hope, it is not a real problem for you.

I have any number of command-line tools which depend on recent versions of things, and are not project-specific. I can install them under a prefix that is inside my $HOME.
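Concretely, something like this (the package names are just examples):

    pip install --user httpie              # lands under ~/.local, no sudo
    gem install --user-install bundler     # lands under ~/.gem
    export PATH=~/.local/bin:$PATH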

Or when I make a project, I can make a sandbox where the tools get installed.

And I can deploy it in a sandbox, not haphazardly depending on system versions of everything.

None of these things I am doing to make my life as a developer manageable are harming your "architecture" (scripts to run package managers are architecture now?).

There are good use cases for platform packages. But every little thing should not be done with them. That adds up to a HUGE waste of time, with no real purpose.

When I make projects, I often need them to run against any number of different versions of things. But the most minimal requirement is to be able to use an actually-recent version of a dependency, where the platform you have dictated thinks recent means "6 years old." Neither of these can I do with the existing platform packages. So you demand that I make platform packages for every little tool and dependency I may use. Then, everything I want to use has to run against exactly one set of versions. So you are actually asking me to make my own whole-platform upgrades, or never use recent packages. That's not my job, it's not a sane way for me to do my job, and it actually doesn't benefit you at all. It reflects a really profound lack of understanding not only of my job, but actually also of your own job.

Then as an author of a library or tool I made available for free, you are demanding that to satisfy you I make packages for several different platforms, each with their own idiosyncrasies and versions of everything, and in fact that I package half the universe for each platform since most of them do not have recent versions of anything - or again only ever use 6 year old dependencies. All so that your job never has to go beyond running the One True Package Manager, whatever you think it is.

Then after all this unnecessary pain, your "solution" will require me to run an entire fresh VM for each new thing. Because you have stipulated that I have to dump all my dependencies for every project into one big sewer, it is guaranteed that there will be version conflicts. This is amazingly stupid because the tools already exist to avoid it very easily and people are already using them. And on your side, all you have to complain about is vague bruises to your feelings because not everybody is using the One True Package Manager for every little thing.

There is such a thing as a business listening too much to system administrators, when these system administrators do not at all understand software development, because their motivation is to prevent change and have less to do rather than to facilitate development of the business. When you blast this complaint at the whole community, you are asking for time to stand still and for nobody to develop anything new in the entire community. ALL forward progress must halt in order for you to have less work. It's not happening and it shouldn't happen.

If you are paid to be a system administrator, please perform your function as a system administrator. Please resolve these problems within the framework of your own company's division of labor. If you have to build packages for your production platform, I'm sorry but that's probably part of your job. If you don't like it, please whine to HR instead of whining to HN.

If you think it will be easy to make every language developer use the same package manager as every other language, it should be equally easy to make all the platforms use one package format instead of the arbitrary hell of different ones they are using now. That is going to be your best bet for actually making one true package manager. Good luck.



