Convert Servers to Docker Containers (zwischenzugs.wordpress.com)
61 points by zwischenzug on May 24, 2015 | 34 comments



While a cool hack, please don't do this in production.

It's perfectly fine to import physical servers into virtual machine images, but it completely defeats the purpose to dockerize whole servers.


Does it? While I'd consider myself a relative purist, in that I'd prefer container images that are as pared down as possible, I also went through the process years ago of turning 20+ physical servers into OpenVZ containers.

While you lose out on some of the benefits in the short term, in the long term you get the benefit that previously hard-to-decompose monolithic legacy setups can be sliced and diced gradually until you have units that are reasonable to start rebuilding cleanly. This is one of those areas where layers etc. really are quite useful.

E.g. let's say you have a web server with umpteen different web apps. Turn it into a single container to begin with. Then export and create an image out of the OS and web server, dissect each of the web apps into layers on top, and separate them into one container per app. Now test a clean base image and try to layer each app on top of it, re-deploying each individually as you confirm it works.
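A minimal sketch of that first step (hostname, image names and the app command here are all placeholders):

    # stream the legacy server's root filesystem into a base image
    ssh legacy-host 'tar -czf - --one-file-system /' | docker import - legacy/base

    # then peel one app off into its own layer/container
    cat > Dockerfile <<'EOF'
    FROM legacy/base
    CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
    EOF
    docker build -t legacy/webapp1 .
    docker run -d -p 80:80 legacy/webapp1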


> While you lose out on some of the benefits in the short term, in the long term you get the benefit that previously hard-to-decompose monolithic legacy setups can be sliced and diced gradually until you have units that are reasonable to start rebuilding cleanly. This is one of those areas where layers etc. really are quite useful.

So. Much. This.

I work with very few greenfield projects. I work with a lot of existing, complicated, annoying monolithic architectures. Approaches that let me start refactoring those into something vaguely sane are a godsend.


This is exactly what really happens.


What purpose? Whose purpose?


Obviously this isn't a great way to use docker, but wow is it neat. I could see this potentially being used to make a copy of a running legacy server as a temporary solution or as an emergency backup.

Ah, there's a tool that does this in general, neat: http://devstructure.com/blueprint/
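Going from memory of the README, so the exact subcommands may differ:

    blueprint create my-server   # snapshot the server's packages, files and sources
    blueprint show my-server     # inspect what it captured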


The tool is quite old and hasn't seen updates in some time.

https://github.com/devstructure/blueprint


Convert Servers to Services. Convert Services to Containers


This is completely missing the point of containerisation.

If you're dumping a server or a large bunch of applications inside a container you're doing it wrong.

I can't believe the number of people I've seen running practically whole operating systems inside a container.


> I can't believe the number of people I've seen running practically whole operating systems inside a container.

What's wrong with this?

I mean, technically, containers are all about running the entire userland of an operating system; it just happens that we usually use very small values of "operating system" when we do this.

If you run a proper init system inside the container, you don't have to deal with the issues that come from having orphaned processes inside containers. And this ends up being a far more meaningful way of encapsulating components that are tightly linked into a single high-level unit.
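A rough sketch of what I mean, using supervisord as the in-container init (runit or s6 would do the same job; the config path is illustrative):

    cat > Dockerfile <<'EOF'
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y supervisor
    COPY supervisord.conf /etc/supervisor/conf.d/app.conf
    # supervisord as PID 1 reaps orphaned processes and restarts its children
    CMD ["/usr/bin/supervisord", "-n"]
    EOF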

The only reason that this is an antipattern with Docker is that the Docker runtime doesn't properly handle a lot of things like signal handling or process monitoring for non-init processes, which means you have to reinvent the wheel if you want to have multiple applications running reliably in a container. But that's a limitation of Docker specifically (and one that will hopefully be solved), rather than a fundamental premise of containers in general.


To me, it's about grain size.

If I want to run a whole suite of apps together with an operating system, I'd use VMs. They're a solid, well-established tool for that.

But what if I want to take the grain size smaller? What if I want to stop trusting my apps as much? What if I want to stop worrying about improper or mysterious interactions between apps?

To me, Docker is an attempt to answer that question. I hope Docker doesn't make it easy to treat containers as just another kind of VM. I think that if they stick to their opinionated stance, it'll push software architecture in an interesting direction, as VMs did previously.


To provide an alternative perspective, the dogma that "one container = one program" looks truly bizarre to some of us on the outside who have been using containers for years and years at various companies.


I prefer to do this - you run entire application stacks inside your container (where it makes sense), rather than running 5 different containers to support the one application. Particularly if your application stack is immutable, it makes deployment far easier than trying to coordinate multiple containers (and versions).


This is fine for a development environment or even a very small application. But when you need to scale individual components of the stack differently, e.g. Postgres, Redis and RabbitMQ, it doesn't make sense to scale the whole stack horizontally by simply starting a new container.
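With one container per service you can scale just the hot tier instead (image and service names here are only illustrative):

    docker run -d --name pg postgres:9.4
    docker run -d --name redis redis:2.8
    # add web instances without touching the backing services
    docker run -d --link pg:db --link redis:cache myapp/web
    docker run -d --link pg:db --link redis:cache myapp/web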

If you take a look at a PaaS such as Heroku, you always get a DNS address for these other services. In some cases they may simply be another scheduled container, but in other cases you may be using a database-as-a-service platform similar to Amazon Dynamo.


It's interesting that I've heard this argument repeatedly from people in the Docker world, yet rarely from those who have experience of implementing it in multiple organizations (I do).

The philosophical arguments drift away as the money is saved, and the code moves to a microservices architecture anyway as the rank and file 'get it'.

In any case, the intention is not to run everything in a container, but simply get from A (snowflake servers) to B (something useful in a container).

In short, if 'doing it wrong' saves your business a ton of money, then I'm not sure how wrong that is.


Because you are using the wrong tools; LXCd is (or was) written for exactly this purpose.


Do you mean this:

https://github.com/lxcd/lxcd

or something else?


I think he meant LXD, the daemon for managing LXC containers written by Canonical. See https://www.stgraber.org/2015/04/21/lxd-getting-started/. I guess that the point is that, compared to other solutions, Docker was specifically meant to run with only one service per container instance.
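Roughly, the workflow looks like this (the image alias may differ depending on your configured remotes):

    lxc launch ubuntu:14.04 legacy-box   # a full system container, init and all
    lxc exec legacy-box -- bash          # shell in as if it were a lightweight VM
    lxc list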


LXD would make more sense, yes.

The problem is those other solutions don't have traction. Also, LXD claims to be Ubuntu-specific:

http://www.ubuntu.com/cloud/tools/lxd


Granted, I was not arguing for LXD as a ready-for-mass-deployment tool. And certainly Docker is more popular at the moment. However, I don't think LXD is Ubuntu-specific; see:

https://github.com/lxc/lxd

https://insights.ubuntu.com/2015/03/20/installing-lxd-and-th...

I've just compiled it on Fedora 22, and both the daemon (lxd) and the CLI tool (lxc) seem to work, though I haven't tested any containers yet.


The point of containerization is to take services that conflict and run them anyway on one kernel. If you already know your services are well enough designed and packaged that they can coexist with just process boundaries, why add overhead?


From a security point of view, segregation is A Thing these days. From a sysadmin point of view, deploying a "work unit" makes a lot of sense, with the "work units" deployed in many places.


Yes, isolation is the goal, and sadly it's for reasons of deployability, not security.

This implies that the configuration tools (puppet, ansible, chef, whatever) are simply not good enough.


Most of the configuration tools we deal with try to bring the world from an unknown state to a sort-of known state with imperfect information.

They're full of absolutely awful workarounds for not having well contained state.


Indeed, which is why I invented this:

http://ianmiell.github.io/shutit/

Phoenix deployment makes far more sense to me.


And speaking of isolation/segregation ..

We already run apps in a VM (a JVM), on a VM (VMware). If we wanted to add security, we'd turn on the JVM's security manager.

Adding docker to the layers for security isolation seems a bit silly when you could turn on the security manager instead.
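i.e., something like (the policy file path is a placeholder):

    # run with the JVM's own sandbox enabled
    java -Djava.security.manager \
         -Djava.security.policy=/etc/app/security.policy \
         -jar app.jar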

I love docker, but adding security is not the reason.


> The point of containerization is to take services that conflict and run them anyway on one kernel. If you already know your services are well enough designed and packaged that they can coexist with just process boundaries, why add overhead?

Multiple container overhead is insignificant when using lxc/docker containerization (compared to more traditional virtualization). Containerization exists to compartmentalize services for ease of management and service segregation; to not segregate services defeats the purpose.


Multiple container mental overhead is significant in lxc/docker containerization.

Grokking a system is a lot simpler if you can just exist in it.


Whose purpose?

My point is that technology's purpose is the use to which it is effectively put, not authorial intention.


Pick the "correct" (most efficient) tool for the job.


Yes, exactly: it depends on the problem you're trying to solve. They're not always straightforwardly technical.


Virtualization solves a very different problem, namely that some of your software depends on the wrong kernel entirely, so you want to pretend you have several machines without actually racking them.

Process boundaries also exist to compartmentalize services for ease of management and service segregation. Containerization just makes those boundaries slightly thicker, and only sloppy software that was trespassing over those boundaries gets any benefit, while all software becomes more of a hassle to manage, with multiple copies of its dependencies.


I would say that it depends on how your application is built and what you want to do with it. Sometimes it does not make sense to create a container for each part of your application. It also depends on how you are going to use your application. The application might be the main asset of the company, or it might just be an external self-contained tool used occasionally by a few people.


I think you're likely missing the point of this.

If you have legacy bare metal setups, then moving them into containers by cleanly rebuilding from scratch might be what you'd like to do, but it's very often not what you have time to do.

Moving the legacy setups into a container as a first step to refactoring the setup works great.



