Your approach is the logical one... But Docker currently has a limitation in how it handles removing files in a build. After each build step, the intermediate state is committed as a layer, just like a git commit. So removing files in a Docker build is like removing files in git: they are still taking up space in the history.
The long-term solution is to support image "squashing" or "flattening" in docker build.
A less clumsy short-term solution is to build a Docker image of the build environment, then 'docker run' that image to produce the final artifact. At least that way you get rid of the dependency on the host, which keeps your build more portable (if not as convenient as a single 'docker build').
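A rough sketch of that pattern (image, file, and path names here are invented for illustration):

    # Dockerfile.build -- a hypothetical build-environment image
    FROM debian:jessie
    RUN apt-get update && apt-get install -y build-essential
    WORKDIR /src
    COPY . .
    # Assumes the project's Makefile writes the artifact to /src/out
    CMD ["make", "dist"]

    # Build the build environment once, then run it to emit the artifact
    # onto the host; only the artifact needs to go into the runtime image.
    docker build -t myapp-builder -f Dockerfile.build .
    docker run --rm -v "$PWD/out:/src/out" myapp-builder

The build image can stay as fat as it likes; the runtime image never sees anything but the artifact.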
Our approach is to view Docker as part of our overall development process and to develop stage-specific containers.
For example, we have development containers, build containers, and runtime containers. Runtime containers are further segmented into product-demo containers, testing containers, and production containers (a hypothetical layout is sketched below).
IMHO, a well-designed approach to UnionFS layers is vital to high-quality container architecture.
While we're focused on container use for databases (both in-memory and on-disk), much of our approach applies equally well to application-layer containers.
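For instance, a per-stage layout (file and tag names are made up) can be as simple as one Dockerfile per stage, all built from the same base image so UnionFS layers are shared instead of duplicated:

    #   Dockerfile.dev     - toolchain + debuggers + editors
    #   Dockerfile.build   - toolchain only, produces the artifact
    #   Dockerfile.runtime - minimal base + the artifact
    docker build -t mydb:dev     -f Dockerfile.dev .
    docker build -t mydb:build   -f Dockerfile.build .
    docker build -t mydb:runtime -f Dockerfile.runtime .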
Nice reference to https://www.projectcalico.org . At some point the insanity of using Ethernet on top of UDP to carry IP traffic between containers must stop.
Straight from the horse's mouth -- I admire your product, Mr. Hykes!
I love how you can run Docker inside of a container. What I've done sometimes is run Docker inside my build-environment container. I use Docker Machine (OS X), so I just send the same machine environment variables over to the container, but on Linux you could just bind-mount the socket file. In fact, I have a container just for Google Cloud that maintains my GKE config and makes it easy for me to prepare new deployments to the cloud.
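For reference, a minimal sketch of both variants ('my-build-env' is a made-up image that has the docker client installed):

    # Linux: bind-mount the host's Docker socket so the CLI inside
    # the container talks to the host daemon.
    docker run --rm -it \
        -v /var/run/docker.sock:/var/run/docker.sock \
        my-build-env

    # OS X with Docker Machine: forward the machine's connection
    # variables (and its certs) into the container instead.
    docker run --rm -it \
        -e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH \
        -v "$DOCKER_CERT_PATH:$DOCKER_CERT_PATH" \
        my-build-env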
> But Docker currently has a limitation in how it handles removing files in a build. After each build step, the intermediate state is committed as a layer, just like a git commit. So removing files in a Docker build is like removing files in git: they are still taking up space in the history.
That's true unless you use the very popular "everything in a single RUN statement" trick (see the sketch below).
Disclaimer: I work at Docker.
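Roughly (package names are just placeholders):

    FROM debian:jessie
    # Fetch, build, and clean up in a single RUN, so the files removed
    # at the end never get committed into any layer.
    RUN apt-get update \
     && apt-get install -y --no-install-recommends build-essential \
     && echo "build and install the app here" \
     && apt-get purge -y build-essential \
     && apt-get autoremove -y \
     && rm -rf /var/lib/apt/lists/*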