Can anyone give an overview of what LXD offers, and compare the experience of working with LXD to Docker? I've tinkered with BSD jails, Docker, and KVM, and I'm curious about LXD.
I'm not an expert, but this stuff is on the periphery of my work, so I try to understand most of what's happening in the space.
LXD is essentially a user interface and network API layered on top of LXC: it builds on LXC to add higher-level tools for managing containers.
Compared to Docker, LXC has a history of being used for full-system containers (e.g., a "virtual server" that you could log into as a user and treat like a real machine, say to run a full web hosting stack with databases and everything else), while Docker has a history of being service-oriented (one service per container, and you don't log into it as a user, except maybe for troubleshooting). Both use the same kernel features, and either could be used for the other use case...but history didn't play out that way.
LXD, as I understand it, bridges that gap somewhat: people are using it for service-oriented containers too, but it retains the full-system container use case. It also integrates with OpenStack in ways that I don't think were plausible with LXC.
It's written in Go (as most current container tools are), whereas LXC was written in C (as most Linux systems-level development was at the time).
There's a bunch of container management tools out there and a bunch of container APIs. This is one of them. They all do things a little differently and have different common use cases, but they all make use of namespaces and cgroups and other kernel features to provide a "contained" environment for stuff to happen in, by some definition of "contained".
The simplest explanation I like is that these give you "machine containers" as opposed to Docker's "process containers". LXD is designed to deploy a full image: a running Ubuntu (well, any Linux distro) system with its own init system, SSH, etc. In many ways it feels much like a virtual machine, though you do have some restrictions you don't have in a VM (mostly regarding in-kernel services, such as mounting filesystems or using in-kernel facilities like the iSCSI target).
For most purposes and most applications, you can create an LXD container and log into it, and it feels just like a VM but with no VM overhead. And you can see all the processes inside the container from outside it ("on the host"), e.g. in top.
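A minimal sketch of that workflow (the container name `c1` is my own choice; assumes LXD is installed and `lxd init` has been run):

```shell
# Launch a full-system Ubuntu container; LXD pulls the image
# on first use.
lxc launch ubuntu:22.04 c1

# "Log in": open a root shell inside the container.
lxc exec c1 -- bash

# Back on the host, the container's processes share the host
# kernel and show up like any others, e.g. in top or ps.
# lxc info reports the PID of the container's init:
lxc info c1 | grep -i pid
```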
Yes, LXC is similar to OpenVZ and other containerization platforms that are oriented toward "thin" VMs; things that function just like VMs, but don't require the same resource commitment.
I think this is what many people think Docker is, and what most people actually want, vs. the application-centric approach offered by Docker.
Can you pass through a graphics card in any way, or would it need to be installed on the host? I'm thinking about making my main workstation a Proxmox host, with the desktop environment in a container.
There is still only one kernel: the host's, if you will. Since graphics drivers are kernel-mode, the driver has to be loaded into that kernel. So VM-style pass-through of PCI devices doesn't really make sense.
However, GPUs are increasingly supporting virtualisation and multiprocessing. For example, NVIDIA supports multiplexing a single GPU across multiple VMs for VMware View. I'm not sure if there is special driver paravirtualisation magic happening, but I suspect so. There is at least some GPU process control and resource scheduling now, and I expect this technique will be accessible on Linux in the near future.
At present you can map through the host devices created by kernel drivers into the container, and then access them via userland libraries. You are still dependent on the library versions and the driver version matching. This works for CUDA and OpenGL.
https://blog.simos.info/how-to-run-graphics-accelerated-gui-...
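For reference, the device mapping in LXD looks roughly like this (container name `c1` is assumed, and the driver must already be loaded on the host):

```shell
# Expose the host's GPU device nodes inside the container.
# LXD has a built-in "gpu" device type for this.
lxc config device add c1 mygpu gpu

# For NVIDIA/CUDA, LXD can also inject the host's matching
# userland libraries so library and driver versions line up
# (requires the nvidia-container tooling on the host).
lxc config set c1 nvidia.runtime true
```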
It is possible to do this, but it's considered a bit insecure. So for your desktop it's fine; whether it's a good idea in a multi-user environment is another story.
It also requires the drivers to be installed and loaded by the host.
There are lots of problems with the multi-user use case: many video cards can DMA into main memory, and a user could potentially affect the card in some permanent way before it's handed to another user later.
LXD is a framework around LXC. It works better for running a full OS in a container than Docker does. Docker can't, for example, boot and run a recent unmodified Ubuntu distro, due to issues with dbus and systemd. LXD/LXC can, and it can do it with unprivileged containers.
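You can see the difference with a stock image: an LXD container (unprivileged by default) really boots systemd as its PID 1. A quick check, with names of my own choosing:

```shell
# Launch a stock Ubuntu image; no modification needed.
lxc launch ubuntu:22.04 full-os

# PID 1 inside the container is systemd, not a single app process.
lxc exec full-os -- ps -p 1 -o comm=

# dbus, journald, etc. come up as on a normal install.
lxc exec full-os -- systemctl is-system-running --wait
```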
It's probably more apt to compare it to Proxmox, Virtuozzo, or VMware, but leveraging containers instead of a hypervisor.
Proxmox and Virtuozzo both primarily operate on containers; Proxmox uses LXC and Virtuozzo uses OpenVZ.
With things like kernel same-page merging, LVM thin storage pools, and memory ballooning, the difference between "containers" and "VMs" is shrinking to the point where the main distinction is whether or not there's a software layer emulating the hardware.
Docker is the new kid on the block, and a bonfire of billions of VC dollars can create a lot of smoke.
Yes, I probably could have worded it better...the "but" was meant to apply to VMware.
Proxmox does both hypervisors and containers though...I wouldn't say primarily containers.
But that is a better crowd to specifically compare LXD to, rather than docker.
You can mess about and create a Docker image to run a full OS, and you can also get LXD/LXC to work for Docker use cases. But you're swimming upstream in doing so.
FWIW: Virtuozzo is the commercial name for OpenVZ (plus some commercial tools), and both use the same (their own) implementation of cgroups/namespacing.
I find that it is most similar to FreeBSD jails. Type a command to spin up a container, and it's just like having a Linux VM I can SSH into, with its own file system and everything.
> I find that it is most similar to FreeBSD jails.
That is a fair statement; however, the capabilities go way beyond those of FreeBSD jails. LXC/LXD has UID/GID remapping, separate namespaces for container PIDs and the like, the option of unprivileged containers, and cgroups to organize resources.
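The UID/GID remapping is easy to see from the host: root inside an unprivileged container maps to a high, unprivileged UID outside it. A rough illustration (the 100000 base is a common default, not guaranteed, and `c1` is an assumed container name):

```shell
# The host allocates a range of subordinate UIDs for the
# container runtime, typically something like root:100000:65536.
grep root /etc/subuid

# Inside the container, you're root...
lxc exec c1 -- id -u          # prints 0 inside the container

# ...but the same init process, seen from the host, runs as a
# high unprivileged UID (container root = host UID 100000 here).
ps -o uid= -p "$(lxc info c1 | awk '/PID/ {print $2}')"
```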
If you click the link, there is a demo service to try out LXD through your browser.
It drops you into a shell and provides a tutorial to follow in it.
I was a heavy user of LXD and ended up switching to systemd-nspawn, and I never looked back. It was much more intuitive to me, files were easy to access, etc. Networking in particular is much easier (for me, at least).
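For comparison, the systemd-nspawn workflow is just a directory tree plus one command; the easy file access falls out of that. A sketch using Debian as an example (any distro bootstrap tool works):

```shell
# Build a minimal root filesystem into a plain directory.
sudo debootstrap stable /var/lib/machines/deb1

# Boot it as a container, with systemd inside as init (-b).
sudo systemd-nspawn -D /var/lib/machines/deb1 -b

# The container's files are ordinary files on the host,
# which is a big part of the appeal:
ls /var/lib/machines/deb1/etc
```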