> Run your applications in Docker from day 1 (with docker-compose it’s as valuable for dev as it is for production) and think carefully before letting your applications store local state.
I think this is the key takeaway for many startups. Get it so you:
1. Have a single-command way to bring up your entire backend
2. That command can run on your dev machine
3. You document all of the settings for your containers and deps
Once you have that in a docker-compose.yml file, migrating to something like kube when you need health checks, autoscaling, etc. is easy.
The only thing you must be strict about is bashing people over the head so that everything your company runs, runs in this environment you've created. No special deployments. Establish one method with one tool and go from there.
At every company I've worked at, I've brought a single-command `docker-compose up -d` workflow, and people were amazed by the productivity gains. No more remembering ports (container_name.local.company.com), no more chasing down configuration files and examples, no more talking to shared DBs, etc.
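As a rough sketch of what that single-file setup can look like (the service names, image tags, and ports here are made up for illustration, not anything from a real project):

    version: "3.8"
    services:
      api:                                  # hypothetical app service
        build: .
        ports:
          - "3000:3000"
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app_dev
        depends_on:
          - db
      db:
        image: postgres:12
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app_dev
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

With something like that checked in, `docker-compose up -d` is the whole onboarding story: teammates only need Docker installed, and every setting for every container and dependency is documented in one place.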
Yeah, docker compose is great for running distributed apps locally when everything “just works”. The problem is that it also impedes development when the compose setup isn’t fully optimized.
I work on dockerized rails apps, and whenever a new dependency is added, it can take 15 minutes to rebuild the image, which completely breaks my flow. Docker also creates a ton of data bloat locally, requiring me to run `docker system prune` pretty regularly. Setting up a dependency cache isn’t completely straightforward either.
I still think docker compose is worth it, but it’s a deceptively complex beast to get right, and most companies I’ve worked for don’t take the time to get it absolutely right. There’s definitely a cost to using it that people tend to overlook.
Split the Dockerfile into three parts: a builder image for gems, a builder image for assets (Node.js and the like), and a final image that copies code from disk, gems from image one, and assets from image two. You will end up saving a HUGE amount of time.
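A hedged sketch of that three-stage split for a Rails-ish app (stage names, base images, and the asset build command are all placeholders you'd adapt):

    # stage 1: gems only -- rebuilt only when the Gemfiles change
    FROM ruby:2.7 AS gems
    WORKDIR /app
    COPY Gemfile Gemfile.lock ./
    RUN bundle install --jobs 4

    # stage 2: JS/CSS assets only (node toolchain stays out of the final image)
    FROM node:12 AS assets
    WORKDIR /app
    COPY package.json yarn.lock ./
    RUN yarn install --frozen-lockfile
    COPY . .
    RUN yarn run build          # whatever your asset build actually is

    # stage 3: final image -- code from disk, gems from stage 1, assets from stage 2
    FROM ruby:2.7-slim AS runtime
    WORKDIR /app
    COPY --from=gems /usr/local/bundle /usr/local/bundle
    COPY --from=assets /app/public ./public
    COPY . .
    CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]

Editing app code then only invalidates the final `COPY . .`, so the slow gem and asset layers stay cached.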
First, make sure you only copy your Gemfile and maybe Gemfile.lock before running bundle install. If you aren't, that could explain the amount of bloat, since every build will create some large layers with no chance of re-use.
You could try making your own base image that installs your gems. E.g., copy your Dockerfile to Dockerfile.base, remove everything after the bundle install, and build it as, say, app-base:latest.
Then change the FROM in your Dockerfile to app-base, and remove any OS package installs. Keep the bundle install; it will use the layer from app-base if unchanged, or do an incremental install if not.
This will reduce the amount of data bloat too.
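Roughly what that looks like (image names and package lists are placeholders):

    # Dockerfile.base -- everything up to and including bundle install
    FROM ruby:2.7
    WORKDIR /app
    RUN apt-get update && apt-get install -y build-essential libpq-dev
    COPY Gemfile Gemfile.lock ./
    RUN bundle install
    # built occasionally:  docker build -f Dockerfile.base -t app-base:latest .

    # Dockerfile -- the day-to-day build starts from the prebuilt base
    FROM app-base:latest
    COPY Gemfile Gemfile.lock ./
    RUN bundle install      # cached if the Gemfiles are unchanged, incremental otherwise
    COPY . .
    CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]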
You could try using guard-bundler or something similar to further streamline the process when adding gems.
I'd suggest using targets within the Dockerfile combined with BuildKit. Targets allow you to build multiple images from one Dockerfile, or have multiple distinct stages that depend on one another. Using BuildKit means you can build target dependencies in parallel.
Personally, I found having multiple intermediate dockerfiles more confusing than helpful, so having an easy to follow, centralised build made life a lot easier for me, especially if you produce multiple images that are pretty similar at the base.
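For example (the target names are whatever your Dockerfile defines, nothing standard):

    # both targets come from the same Dockerfile; BuildKit skips stages a target
    # doesn't need and builds independent stages in parallel
    DOCKER_BUILDKIT=1 docker build --target gems    -t app-gems:dev .
    DOCKER_BUILDKIT=1 docker build --target runtime -t app:dev .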
At my current company, since I'm using gRPC, the build contexts for my containers are annoyingly large and I'm definitely feeling this. It's a massive pain and really annoys me, but if I had to choose between 40 minutes of stressful debugging of someone else's config files vs 40 minutes of waiting for a command to finish, I'd take the latter.
Long term I'm planning on moving these builds to bazel and taking advantage of incremental builds for my entire monorepo. I'm in the final stages and just validating some stuff is in place before I do that. In the short term I've mitigated this by adding in heavy unit test coverage. Backend devs can basically run the unit tests for most services and be confident the service is working as expected and enforcing any/all security/auth checks. When developing I don't even start up the containers. I boot them up once at the end just to check my work.
What my current setup gives me:
1. My applications have no dynamic "linking"
2. The containers are distroless
3. Services can be built independently
4. I run the same exact sha in dev/staging/prod
To do what you're suggesting would require giving up one of those, which for me is not worth it to avoid a one-time ~40min build of 20+ services (~2min/service). The last time I paid that 40min cost was ~2 months ago when I reformatted my hard drive.
The better solution for me is looking into a build system that can support my different language and tooling choices which happens to be things like Blaze (Bazel/Pants/Buck/Please).
If I understood correctly, this means that every time you add a new dependency, it takes 15 minutes to rebuild the first time you `docker-compose up` right?
I assume this is due to docker downloading the related image and running the container.
If this is the case, isn't this orders of magnitude faster than adding the dependency manually? And aren't the gains even more significant when we factor in the fact that in most cases dev and prod environments are identical? Or am I missing something?
> this means that every time you add a new dependency, it takes 15 minutes to rebuild the first time you `docker-compose up` right?
Yes
> If this is the case, isn't this orders of magnitude faster than adding the dependency manually.
No. When developing a rails server running locally, you just need to install the new dependency via bundle install (usually takes just a few seconds) and then your server is ready to run again. With docker, the image needs to be entirely rebuilt, which is a much slower process, even with good caching. The difference is that docker treats the build as ephemeral, something to be thrown away and rebuilt, as opposed to running directly, which simply augments the existing environment. It’s a much, much slower process to rebuild via docker.
Adding Ruby dependencies (gems) is pretty trivial. And the Ruby package manager (bundler) only installs the new dependency if you do it on your own machine. In Docker it will do all of them.
It's worth noting that bundle uses a Gemfile, so it's not installing each gem as a separate step.
There are a lot of projects these days that aim to use Kubernetes as a docker-compose environment. I personally use http://skaffold.dev/ with either a local Kubernetes cluster built with https://github.com/rancher/k3d or https://github.com/kubernetes-sigs/kind. I think there's a very easy argument to be made that says that running K8s locally is overkill, but what I will say is that if you run your applications locally in K8s, that's one step closer to having your local environment mirror the production environment. Couple that with things like running https://github.com/localstack/localstack locally and you get even closer.
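As a sketch of that workflow (the image name and manifest path are placeholders, and the apiVersion varies with the skaffold release you're on):

    # skaffold.yaml
    apiVersion: skaffold/v2beta5
    kind: Config
    build:
      artifacts:
        - image: my-app          # placeholder image name
    deploy:
      kubectl:
        manifests:
          - k8s/*.yaml           # placeholder manifest path

`kind create cluster` (or the k3d equivalent) gives you the local cluster, and `skaffold dev` then rebuilds and redeploys on every file change.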
`kind` and `k3d` are great but I don't think they're aimed at being a developer-friendly cluster, from what I understand. They both require a lot of tweaking and setup to make them look like they're a managed kube cluster (LB, storage, ingresses, etc). I wish there was something that could create a kube cluster that:
1. sets up automatic port forwards to my host system
2. handles ingresses by binding port :80 on my system
3. provisions PVs for you when you create PVCs and respects retention policies
4. Allows you to select from a monitoring stack and only require that you supply configurations for dashboards/alerts that can be 1:1 applicable to production so devs can debug dashboards/alerts locally.
`kind` targets being "a real kubernetes implementation".
`k3d` targets being Rancher's kubernetes (k3s) running in Docker, so if you buy into all of the Rancher tooling it'd be a great option.
I have a script that simplifies creating kind clusters with a local registry and ingress (to make it more developer friendly, that is): https://github.com/MoserMichael/kind-helper. But I guess kind is more for automation-testing purposes, as it runs all nodes in docker containers on the same machine.
Looks like a nice tool. If it could watch for changes, rebuild, repush, and restart pods it would be the perfect dev environment. Even still this looks like a great start. Have you thought about polishing it a little more and doing a Show HN?
Thanks, I think that building images and restarting pods would be a bit out of scope for this script, but you have examples in the tests for this project. For kubernetes cluster administration I have this project: https://github.com/MoserMichael/s9k
Neither k3d (k3s) nor kind supports building images, since they use containerd directly, and building seems to be a requirement for a development environment.
This means that you need to build your app using Docker and then import it into the k3d/kind environment, thus needing a local registry (not enabled by default), which means that images are duplicated.
Yea, exactly, this. I see all sorts of people making build systems and tools and they say you just do "my_command build my_target && docker-compose up -d" and I can't help but wonder: are these people ONLY developing 1 service? For me, that would be multiplied by 20x and we're not even at the number of services we're expecting.
We do a lot of integration with third parties which is essentially pull down some data and write it into a microservice responsible for keeping that data. Each of these integrations is owned by a separate engineer because it takes up a lot of time to deal with integration calls, planning, etc and we need to be language flexible (some people only give you a .dll or .jar, no choices).
I'm assuming people building these tools don't use a local development workflow because: their systems are too big to run on a single machine (Googlers) or don't work on large projects (open source side projects).
The closest things to what you and I are looking for, though, are skaffold and tilt.dev.
Talking about kind and k3s, does anyone know how to load a local docker image into a k3d cluster?
The command is supposed to be: 'k3d image import --cluster myCluster image:tag', but it fails with "Failed to start exec process in node 'k3d-myCluster-tools'"
On kind it works fine with 'kind load docker-image --name myCluster image:tag'.
You might take a look at my script https://github.com/MoserMichael/kind-helper, it sets up a local registry with kind so you can do without image import. (Comes with a test case/example.)
I prefer to take it one abstraction further and set up a makefile where the default `make` brings up the app. This makes it so you can rework the runtime as you see fit and nothing changes. It also doesn't require you to jump into docker immediately, and gives a good extension point for the various 'helper script' workflows that you will always have.
Makefiles are nice in theory but don't scale well to team members who don't know how to use make. If your team is full of experienced C/C++ devs then this will probably work great for you. If you are working with a bunch of android/javascript/etc developers who've never seen make, and who haven't exercised the skillset of reading through 3-decade-old email threads to find answers, you'll find that the only thing you'll do is "make" them upset.
Not to pile on -- and I'm sure there would be exceptions -- but I've worked with some large and dynamic (employee churn) dev teams and it's always been worth it in my cases, so far, to burn a half day with new people to get them comfortable with Makefiles (and how to fix the common errors when editing them). Teams I've been on have used them for python, ruby, node, Java and Rust[1] codebases (across the years). It's definitely language-agnostic enough on its own. Oh! And the same Makefile is used in the CI/CD pipeline, to make the entire process more likely to be repeatable by our automation tools.
[1] Yes, our Makefiles sometimes call docker/npm/ant/maven/gradle/cargo under the hood. Its been worth it, in my experience. One syntax on all projects to set up the workspace, refresh dependencies, deploy to dev env, clean, etc.
C. S. Lewis taught me newness is no virtue, and oldness is no vice. I’ll leave the answer as to why I’m quoting a writer on religion in response to an anecdote on developers’ quirks about their environment as an exercise to the reader.
Don't these issues also apply to the docker-compose alternative? I'm recommending make as "the one tool" to use for all of your teams' apps. No matter what app they work on, everything is set up with `make bootstrap` and everything is run with `make up`. Everything is deployed with `make deploy`. They don't need to understand the _how_, only the _what_.
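A minimal sketch of that kind of Makefile (the target names are just the ones above; the commands underneath are placeholders for whatever the project really uses, and recipe lines must start with a literal tab):

    .PHONY: bootstrap up down deploy

    bootstrap:        ## one-time setup: build images, seed the DB, etc.
    	docker-compose build
    	docker-compose run --rm api rake db:setup   # placeholder setup step

    up:               ## bring up the whole backend locally
    	docker-compose up -d

    down:
    	docker-compose down

    deploy:           ## wrap whatever the real deploy tooling is
    	./scripts/deploy.sh                         # placeholder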
No, other tools do not suffer from what makes make complex. Make is "hard" because of a few things:
1. The syntax is very complex
2. It's just a wrapper around shell command execution
3. There's no single standard everyone fully implements
If I implemented a Makefile then I'd have to make sure it's compatible with macOS and GNU Make. The syntax for makefiles and target deps is also very complex ("Do I want to use % or $ or @?"). Make also implements very few pieces of functionality: it forks out to SHELL for most things, and because of this you need to have everything your Makefile uses installed on your system. If you have a Makefile, invariably someone implements some tool that's super helpful and works... until it doesn't. It depends on some system configuration or utility that is no longer a dep of something you've manually installed, and the Makefile breaks. If you use containers for all of your functionality this is no longer an issue, because you can vendor base layers of OSs/images and make sure they don't break. You can also test changes that could be breaking without rolling them out to other developers. I cannot test whether my Makefile will run on a developer's mac if I don't have a mac myself, and even then it's a crap shoot (do they have homebrew? is their PATH correct? etc).
> The syntax for make files and target deps are also very complex "Do I want to use % or $ or @?"
How is this any more complex than any programming language? Take javascript. Do I want to use =, ==, or ===?
> I cannot test if my Makefile will run on a developer's mac if I don't have a mac myself and even then it's a crap shoot (do they have homebrew? is their PATH correct? etc).
Containers suffer from these kinds of problems as well. For example, if your ip tables are not set up just so, you get no network access from inside your container.
Well the entire linked article and this entire thread (and all the others) could be boiled down to "Just learn [kubernetes,docker,whatever] then?" but that's not super helpful.
Yeah but if you use bare tools then it’s hard to change them and the interface may differ project to project. If you let the JS developers pick your universal build tool then they’ll want to change it annually because now the old one they loved is “terrible” and some other thing is great and is definitely the way forward. Plus then you’ll need node on every machine that needs to run the build, and probably you’ll want to lock the version and not use the one in LTS OS releases because those are too old so now all your poor non-JS developers are installing NVM and Node and a pile of other crap.
Or you can just make everyone wrap their build tools in make.
The solution here is to choose a build system that is language agnostic. The most common of these is docker & docker-compose. There's also the Blaze-likes (Pants, Buck, Bazel, Please) that let you hermetically include tools like node inside of the build chain.
Make is not a "wrapper" it's a "translucent layer on top of" since it doesn't hide the deps on the underlying tools.
When I do `docker build -t ... something` or `bazel build //something` I don't need to care if it's node, C++, Java, etc. I just need to know that it is a thing that I want to build. Make cannot deliver that to you as you need all of your system toolchains installed and you need to make sure your makefile works on every OS/system config.
> Make is not a "wrapper" it's a "translucent layer on top of" since it doesn't hide the deps on the underlying tools.
For me, that's kinda the point. If (when!) the underlying layers break, I can open up the Makefile and see exactly what was called, without too much abstraction. It's not there to provide complex build options, it's to provide aliases to the commonly used commands in a platform agnostic way.
Most of my make commands are simply aliases for one or more commands, which are often something like `docker run --rm -it project_runner_image some_command`
There's some real problems with Makefiles in the form of tabs as the prefix, running everything as a shell, weird platform specific versions, etc - but I've yet to see anything else which is as small, as universally available, with such a low barrier to entry. Most of the other options I've seen so far (and I'll be the first to admit I haven't looked very hard) seemed to either be overly complex for simple dev setups, or bound to a particular language.
Yes, you still have to make sure the underlying commands work, because Make doesn't provide anything to isolate the commands. But that is also sort of my point: make is nothing but a generic task runner on top of what you are already doing. It's just about having commands and conventions that don't change depending on the underlying systems.
I agree that blaze is the better option, but all the benefits it provides come with a greater up-front cost, where it's not necessarily possible to drop in on a project and run with it. Also notable is that blaze is inspired by and is a replacement for make. So in a lot of ways, make is a poor man's blaze now.
> If you are working with a bunch of android/javascript/etc developers who've never seen make, and who haven't exercised the skillset of reading through 3-decade-old email threads to find answers, you'll find that the only thing you'll do is "make" them upset.
Actually I find most answers either in official documentation or on StackExchange sites. And I much prefer digging in mailing lists over digging in GitHub issues. You access both over web anyway.
I would love to see an example of how to set something like this up if you've got any links in mind.
We've got multiple apps deployed various ways, everything from pseudo-automated CI/CD (auto build but manual deploy) to fully manual deployment directly out of our IDE. Nothing set up within Docker containers yet except for a few proofs of concept.
This facilitates a benefit that I think has been described as "self-contained systems". There's one website on the topic that imho is too narrowly focused and incorrectly seeks to distinguish the idea from microservices.
That said, a self-contained system in the context of a container orchestration framework (docker-compose, kubernetes, etc) facilitates a number of benefits, not the least of which is the ease of scaling engineering staff.
For example, in typical organizations that claim to have modern development practices, there are static dev, test, and staging environments for a given service. Obviously these are subject to config skew, etc. and must be time-shared between folks. When it's a simple task to create an environment per engineer for each purpose or any other, that can accelerate development tremendously.
There are other benefits around resilience in production, but that's another topic entirely.
Agreed with all the mentioned upsides of dockerizing apps. One disadvantage, though, is that a lot of additional work is required for APM/logs/infra monitoring.
Even in 2020, newrelic/datadog/others' documentation seems to be written for non-containerised setups.
Docker is slow on Mac. Especially with volumes. This is one drawback to dev'ing everything with Docker locally. If you need to coordinate lots of services across containers to establish the local environment though this is probably something you can live with compared to the alternatives.
Ask every enterprise that existed prior to 2018? Even for small and mid-size enterprises, manual deploys aren't that huge of a deal. Automation with some bash scripts can get you pretty far. Especially if you're only managing a monolith or two. I've worked in some platforms that absolutely defied automation due to their proprietary nature. It just meant that we'd need a few people to spend 1-2 hours every two weeks to run a deployment. That cost added up over even a year or two is probably less than paying for a redesign and replatform.
I've also done deploys just using managed services like ECS/Fargate or Heroku where we just build an artifact and push it to the host with a script. A stack that's a load balancer, stateless app server then a DB and/or file store can be provisioned manually once then not really worried about for a long time.
To be clear, I was more asking how people achieve the sort of fast bootstrapped dev environments the parent comment describes using k8s, especially where you’re not necessarily talking about simple topologies like a monolith plus a database.
Option 1: Everyone develops on the CI/CD platform.
Ops gives devs "disposable environments" to do their testing in. Basically, GitHub + Jenkins + Terraform + AWS. I've used this to stand up real infrastructure every time a PR is opened, and pipelines run against the infra. Code and infra match each other because they both exist in the same branch/repo (monorepo; you can of course do multi-repo, just takes more coordination). It's all destroyed as soon as the PR is closed, but you can also keep it up to do dev work against. You can also keep one copy of the latest master branch up at all times (the "dev" or "cert" or "test" infrastructure) as a shared test environment. Downside: you have to have access to the network (which is in AWS, so that's not so difficult). Upside: the dev environment always mimics production, devs don't need to do anything to stand it up.
Option 2: A bunch of mocks, and scripts to stand up stuff in Docker, using docker-compose or something else. This becomes a bit of a problem to manage with lots of devs, though, and docker-compose is not a good model for production deployments.
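To make option 1 concrete, a rough sketch of the per-PR lifecycle using Terraform workspaces (the `env_name` variable is hypothetical; `CHANGE_ID` is the PR number a Jenkins multibranch pipeline exposes):

    # on PR open/update
    terraform workspace select "pr-${CHANGE_ID}" || terraform workspace new "pr-${CHANGE_ID}"
    terraform apply -auto-approve -var "env_name=pr-${CHANGE_ID}"
    # ... run the test pipelines against the freshly created infra ...

    # on PR close
    terraform destroy -auto-approve -var "env_name=pr-${CHANGE_ID}"
    terraform workspace select default
    terraform workspace delete "pr-${CHANGE_ID}"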
Tilt is pretty great for dev workflows on k8s; they have affordances for debuggers and fast restarts without the full docker build, docker push, k8s delete pod workflow.
Depends on what cloud services you use. I am very opposed to using any cloud service that I can't self host in some way or another. That's a sure fire way to end up screwed. The main cloud services I use at work are:
1. S3/Azure Blob Store
2. DocumentDB/Cosmos DB/MongoDB Atlas
3. Aurora/Azure Database for PostgreSQL
4. ElastiCache/some complex name in azure
All of these data stores can be hosted locally using minio for s3, MongoDB for the mongo implementations, and postgres/mysql for the RDBMSs, redis/memcached for ElastiCache.
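For illustration, those local stand-ins can live in the same compose file as everything else (image tags and credentials here are placeholders):

    services:
      minio:                     # stands in for S3 / Azure Blob Store
        image: minio/minio
        command: server /data
        ports:
          - "9000:9000"
      mongo:                     # stands in for DocumentDB / Cosmos DB / Atlas
        image: mongo:4.2
      postgres:                  # stands in for Aurora / managed Postgres
        image: postgres:12
        environment:
          POSTGRES_PASSWORD: local-only
      redis:                     # stands in for ElastiCache
        image: redis:5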
If there were ever a case where I needed to use a real (full vendor lock-in) solution, I would hide it behind an API of my own and configure that API to store data locally. For example, the Azure Blob Store local container is completely broken for any sensible usage and I've been on a github issue thread for ~3 months waiting for them to resolve the issue (you're forced to link your netns to that container to talk to it, as the application clients only work over 127.0.0.1). So essentially I have the following settings:
Define the same scripted startup on whatever floats your boat and run it.
We used to be able to clone 400 VMs in 1 hour, sanitize all customer data, and refresh a full dev environment from prod in an hour. TBs of data.
But let's all stand around and claim k8s is not your flavor-of-the-day solution to a very old problem, but one that carries a huge amount of complexity and so much network effry that it scares the pants off most people.
You can set up wildcard DNS names or use `{STAR}.{localhost,local}` for local development. Just use a reverse proxy on `:80` that allows you to define wildcard subdomains like `container_name.{STAR}`, or set up the reverse proxy to automatically take the subdomain and use that as the backend server name (should be possible in nginx).
HN uses * for formatting so it's been replaced with {STAR}
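A sketch of the nginx variant (this assumes the proxy runs on the same Docker network as the app containers and leans on Docker's embedded DNS at 127.0.0.11; the port is a placeholder):

    server {
        listen 80;
        # capture the first label of the hostname, e.g. "api" from api.localhost
        server_name ~^(?<svc>[^.]+)\.localhost$;

        location / {
            resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS
            proxy_pass http://$svc:3000;     # assumes every service listens on 3000
            proxy_set_header Host $host;
        }
    }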
Multiple approaches I've seen in the past. My most recent approach is to rely heavily on unit tests and do 99% of your testing there. Before opening a PR/MR/CL into `master` I test my feature out end-to-end using `docker-compose up --build service_name` and make API calls or use the UI to test everything. Code coverage keeps me confident that everything works, my tests automatically run on all other engineers' machines, I can test the code in a very-close-to-prod environment, life is good.
Another approach, from a previous startup, who mainly/only used PHP was to volume mount the code you were editing into the containers that were running and to expose an xdebug port for our IDEs to connect to. This was very hacky and error prone but when it worked it worked.
Something that I plan to do in the future is to setup some incremental build system that can build all of my software and use that to build my containers. Whenever these outputs change I'll have them redeploy the container in docker-compose or minikube. This way I get very fast builds, very small containers, and much fewer hacks.
I think one of the underrated parts of using something like Kubernetes early (or even w/ simpler orchestrators like swarm or rancher), is that it encourages (and sometimes enforces) architecture best practices from the start. IE, you won't be storing state locally, you'll be able to handle servers being randomly killed, you'll already have horizontal scaling, etc. In my experience the hard part of migrating to containers in a legacy app is when they break those constraints, especially around local state and special servers. It's easier to do these things sooner rather than later, and the constraints kubernetes places on you aren't that hard to work around if you design it in from the start.
A good reason not to use kubernetes, though, is if you know your app is probably never going to scale, or if it's the kind of thing that can scale very well on a single machine, or if it's not primarily based on http communication. (I wouldn't write a real-time game server in kubernetes, for instance, because I doubt it'd really help with a primarily UDP workload that is likely going to be attached to one server.)
> A good reason not to use kubernetes though is if you know your app is probably never going to scale
Suggesting that you can only scale with Kubernetes just isn't true. Our tech stack originally ran on AWS in multi-regions. Then we moved to GCP. The architecture was the same for both:
- public facing Load Balancer
- instance groups attached to the LB
- add health checks so unresponsive instances get killed
- add scaling rules based on whatever metric you want (AWS's offering was far better than GCP's)
- choose your instance type(s).
And for the record, we handle ~15 billion HTTP requests per day (traffic is seasonal). Our busiest data centre is us-east, which has a max of 50 instances (16 vCPU, 31GB RAM), and we rarely need them all. You can also mix spot and non-spot instances, so we know we have at least one or two machines that GCP won't kill.
Rolling out updates involves building a debian package, building a new VM image, and then clicking a 'deploy' button (GCP makes this easier than AWS).
I’m not avidly opposed to k8s by any means, but you can get these same properties from any of a variety of easier-to-use schedulers such as Fargate, Heroku, or even EC2 autoscaling groups. Of course, there are probably Kubernetes distributions that lower the threshold of using Kubernetes (and if there aren’t, there really should be) by providing solutions for logging, monitoring, certificate management, Functions (a la AWS Lambda), load balancing / ingress, state management (databases as a service), etc preconfigured out of the box (similar to what you get with AWS or Heroku).
So, I think the downside of those systems is that they tend to be specific to a provider, so you get some lock in, and not all of those things are easy/possible to run locally, whereas minikube or docker swarm can replicate a cluster locally, which I think is really important. Otherwise yeah, those systems can be used to accomplish the same goals.
One thing that I think is non-obvious: there are easy ways to set up a kubernetes cluster, it's just that there's so many options out there that the easier solutions tend to get lost in the noise. If I had to manage kubernetes nodes manually I'd probably never do it, but setting up something like AKS/EKS/GKE -- they all have their own issues, but they're pretty easy to get started with.
The problem with your approach is that you're firmly locked in to the vendor.
And in the case of Fargate, Heroku etc you're paying significantly more than if you had made use of Spot instances or shopped around for a cheaper vendor.
Vendor lock-in concerns are overblown. Unless there’s a real chance you’ll need to pick up and move, don’t worry about it. Your savings by not building/operating everything yourself will dwarf other costs (unless your business has huge scale and you have a world class internal cloud capability which you probably don’t and if you do, you can probably just negotiate a better deal from a cloud provider a la Netflix and Amazon). To that end, you only “save money” by managing everything yourself if you write off the cost of engineering time and talent, which is to say you lose money by doing it yourself because you don’t have the scale or talent to compete with Amazon even with their markup (certainly not when you account for opportunity cost).
Nobody is saying you have to build everything yourself.
But you can just use cloud providers for their hardware and not needlessly tie yourself to their software. For example you can use a managed K8s solution like EKS but then have all of your monitoring, logging, databases etc all be self-hosted.
And it's not just about cost but also about being able to take advantage of other cloud provider's unique strengths or being truly resilient to outages.
The same arguments apply to hardware and software. Cloud providers’ core competency is cloud infrastructure and services and they have the scale to economize their offerings. Your business very likely can’t compete with them, so to the extent that you’re owning things that cloud providers could sell you, you’re throwing away money and that figure very likely dwarfs the risk adjusted cost of maybe having to migrate to another provider one day. (Of course, there are services that are overpriced here and there, but the general principle holds).
> being able to take advantage of other cloud provider's unique strengths
If you’re limiting yourself to using only easily portable services, you’re not going to be taking advantage of any cloud provider’s unique strengths anyway. So you might as well just pick one.
A nickel an hour sounds fine/minuscule, but when you realize that's $30/month vs an alternative VPS at $5/month, I think the cost premium is pretty clear.
> you can get these same properties from any of a variety of easier-to-use schedulers such as Fargate, Heroku, or even EC2 autoscaling group
Serious question, as I don't have any experience of those, can you scale based on a custom metric? It's probably the one feature of k8s I appreciate the most.
On Heroku, I have written a very small autoscaling script that is called once every few minutes by Heroku Scheduler (like a cronjob). The autoscaling script pulls the custom metric, does some arithmetic, and calls the Heroku API to scale the dyno count up/down as desired. (That script was probably shorter and more readable than this comment.)
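Roughly, such a script just reads the metric and PATCHes the formation through the Heroku Platform API; the metric endpoint, app name, and arithmetic below are placeholders, the formation call itself is the real API:

    #!/usr/bin/env bash
    set -euo pipefail
    APP="my-app"                                              # placeholder app name
    DEPTH=$(curl -s https://metrics.example.com/queue_depth)  # placeholder custom metric
    DESIRED=$(( DEPTH / 100 + 1 ))                            # placeholder arithmetic

    # scale the "worker" formation to the desired dyno count
    curl -s -X PATCH "https://api.heroku.com/apps/$APP/formation/worker" \
      -H "Authorization: Bearer $HEROKU_API_KEY" \
      -H "Accept: application/vnd.heroku+json; version=3" \
      -H "Content-Type: application/json" \
      -d "{\"quantity\": $DESIRED}"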
Multicast is definitely lacking (well, it’s completely absent if you’re using Calico), but even for regular UDP there’s a big problem if you care about latency (and if you don’t, why are you using UDP?) Anecdotally, I’ve heard people complain that UDP dropout on their clusters is just too high to actually support the kind of things they want to do. Equally, UDP based products like Tibco FTL just literally aren’t supported on Kubernetes environments.
None of this means I have any answers, I’m afraid.
It all started with Ruby. Ruby's syntactic sugar inspired the "syntactic sugar" of tooling, primarily Bundler and Rspec. Tooling, for what felt like the first time, became a first class citizen. Ruby's tooling made Heroku possible: i.e., reproducible builds across dev, testing, staging, and production environments. Heroku's success was based on the primitives of the Twelve-Factor App[1]. The 12 factors (and therefore Heroku) were fundamentally designed around the already-old lightweight virtualisation technology of LXC. The success of Heroku paved the way for Docker. The success of Docker created the world in which Kubernetes makes sense.
To be blunt: if you don't understand the relevance of Kubernetes, or whether it's relevant to you, you don't understand the benefits of the 12 factors in their broadest sense. The 12 factors are much, much more than just "How To Deploy On Heroku".
Copypasting the 12 factors:
I. Codebase: One codebase tracked in revision control, many deploys
II. Dependencies: Explicitly declare and isolate dependencies
III. Config: Store config in the environment
IV. Backing services: Treat backing services as attached resources
V. Build, release, run: Strictly separate build and run stages
VI. Processes: Execute the app as one or more stateless processes
VII. Port binding: Export services via port binding
VIII. Concurrency: Scale out via the process model
IX. Disposability: Maximize robustness with fast startup and graceful shutdown
X. Dev/prod parity: Keep development, staging, and production as similar as possible
XI. Logs: Treat logs as event streams
XII. Admin processes: Run admin/management tasks as one-off processes
How do you pass hierarchical config? In JSON, YAML, TOML, etc., it's easy to group related settings, but how do you do that with env vars? LOGGING__HANDLER__FORMAT, LOGGING__HANDLER__ARGS__ARG1, LOGGING__HANDLER__ARGS__ARG2? If so, that looks positively awful. If the solution is passing a YAML or JSON string in the env variable, that sounds even worse than having a config file that the app reads.
>> it’s easy to mistakenly check in a config file to the repo
Add it to the ignore file. Someone would really have to force-commit it to get it into version control. Plus, even if it's stored in env vars, you're going to commit the values somewhere, be it your Ansible secrets, SaltStack yml, Chef or Puppet repo, whatever.
To be fair, that's the only point I contend with. The rest is very reasonable.
Goodness gracious. That's dreadful and exactly what I feared the effect of that rule would be were it followed strictly. It looks like a hacky workaround to an unnecessary problem.
I’ve always thought a reasonable pragmatic solution is to force configuration files to be loaded based on an optional env variable. Kubernetes can easily mount a complex configuration file via a volume or configmap (configmapgenerator is awesome). As long as the path is configurable and the env var is documented - it’s not so bad.
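For example, a deployment fragment along these lines (the env var name, paths, and image are just illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels: { app: my-app }
      template:
        metadata:
          labels: { app: my-app }
        spec:
          containers:
            - name: my-app
              image: my-app:1.2.3
              env:
                - name: APP_CONFIG_PATH        # hypothetical; the app falls back to a default if unset
                  value: /etc/my-app/config.yaml
              volumeMounts:
                - name: config
                  mountPath: /etc/my-app
          volumes:
            - name: config
              configMap:
                name: my-app-config            # e.g. produced by kustomize's configMapGenerator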
This ignores the reality that kubernetes came from google and was based on existing (long lived) tooling. Docker is "some shell scripts around namespaces and cgroups" (also long lived, existing tooling).
The fact that there is public interest in these things _maybe_ you can attribute in some way to ruby but not the existence of them IMO.
I was a huge proponent of not hosting monolithic applications on kubernetes; that was until the company I worked for acquired another same-size company and I had to learn their hand-rolled puppet2/shell-script-based infrastructure management/deployment logic all over again (we had our own hand-rolled puppet3/shell-script-based infrastructure logic too, so that's two stacks with their own quirks).
Now I'm completely in favor of hosting anything you can cram into kubernetes in kubernetes. Even though kubernetes is more complex than most other infra tools, most of the time there is only one way to do things (configmaps for config, PV allocations for storage, etc.). So if you understand kubernetes, it's easier to get the larger picture of the infrastructure even if you know nothing about the application stack.
I think what isn't fully appreciated about k8s yet, but what we will look back on, is how it creates an open standard platform for apps to be deployed. It is one thing to port your own apps to run within a k8s cluster; it is another to have and operate a k8s cluster that you can use to deploy services built by others. I hope we see more of this soon.
Not a single mention of elastic beanstalk or App Engine? The best middle ground for small teams who just want one reliable website with minimal scaling (and who can't just choose a non-aws service).
We ran on AppEngine from launch to exit, great experience. Never had to worry about configurations, environments, maintenance, reliability, scalability etc. We just focused on building our product.
Cloud foundry offers their old(er) architecture as well as 2 ways to get the “cf push” experience over Kubernetes. Check out KubeCF and cf-for-k8s. Both of these are open source projects that you can deploy to a Kubernetes infrastructure of choice.
I only have one tiny detail: bursty traffic means that your cluster needs to be able to deal with the peaks. If you're running an on-prem Kubernetes cluster, then there are no savings, unless you can use the capacity for something else during non-peak periods.
The scaling, and potential savings is a cloud feature, not a feature of Kubernetes.
It also enables you to achieve app density. Many of the companies I've been working with lately have large batch processes on a nightly/weekly/monthly basis. For some reason, each job was previously set up on on-prem hardware dedicated per workload. Using Kubernetes and scheduling jobs to manage capacity has enabled us to reduce the number of on-prem servers substantially as part of the migration plan for the cloud.
In my experience hosting services in VMs on-prem I was able to achieve roughly 30% efficiency across 3k instances and hundreds of nodes. In my experience hosting microservices on k8s, I was able to achieve 80% efficiency across hundreds of nodes.
Both were the result of a great deal of work to optimize efficiency. In this case, I use the word "efficiency" to refer to a blend of CPU and memory utilization.
I would nitpick that and say "if you are running a cluster with a static number of worker nodes".
If I had a cloud-hosted k8s cluster and didn't configure any auto scaling, the situation would be equivalent to buying x physical machines and hosting them as worker nodes on prem. No ability to scale to "bursty traffic".
If the burst is for all apps at once, then yes. But if you need to handle temporary bursts of a single app among many, then hosting them in a heterogeneous cluster will give you a benefit.
My assessment is that if you have a large, complex micro service environment with several applications, docker may be a good route to take as an interim to cloud native.
If you have a relatively simple deployment structure (one big application with 10-20 services), maybe you don't need to add the docker skill to what you're already doing.
If you're in the process of rebuilding your applications and you've decided cloud native is the way to go, then containers are pointless.
I'd argue if you don't have the skills and you don't want to pay someone to do this work full-time, you might put your developers in a position to fail.
Based on my 30+ years of building software, I see containers as a dead end. It may help you out in the short term, prove to your CTO that you're using "well-known" technology, but in the end, cloud native is going to replace everything. And before you say "but we can't be in the cloud," then you should know that cloud-native development like Lambdas can be done on-prem as well.
I'm positive this post will get negative votes. That's fine. I like tilting at windmills.
It depends on the app. If it's an existing app, a container is fine. If you're building a new app and you don't care about the infrastructure, Serverless is better.
My problem is that eventually all apps will be rewritten...and it's probable they will be in some kind of Serverless manner.
Containers still have some separation of concerns issues as well as maintaining state or sharing data.
My default would be Serverless, with a plan B for Containers and a plan C for VMs.
> To make a cluster useful for the average workload a menagerie of add-ons will be required. Some of them almost everyone uses, others are somewhat niche.
This is the concern I have with k8s. All this complexity introduces operational and security concerns, while adding more work to do before you can just deploy business value (compared to launching on standard auto-scaling cloud instances)
If you are using a managed kubernetes cluster from a cloud provider you mostly don't need to worry about these sorts of things. If you're not, and deploying to bare metal, the main things you need to worry about are: load balancers, storage & monitoring. If you're large enough that you can effectively run kube on bare metal, you probably have enterprise solutions for load balancing [0], storage [1] & monitoring your applications that you've already validated as being secure/stable.
If you want to go all out you can also grab an operator to manage rolling out databases for you (postgres [2], mongo, etc).
A lot of the complexity people bump into with kube is really poorly planned out tools like Istio that have way too many features, a very overly complex mode of operation (out of the box it breaks CronJobs!!!), and very sub-standard community documentation. If you avoid Istio, or anything that injects sidecars and initcontainers, you'll find the experience enjoyable.
It's the classic trade-off of cost vs benefit. In places I've worked, the benefit has been worth it. The kind of add-ons mentioned in the article are in keeping with the decision for the orchestrator (which is already complex) not trying to do absolutely everything. I feel that is a good thing.
It's kind of like an API gateway with traditional microservice instances. If you have DNS and load balancers pushing requests directly to your services, you might wonder why you would ever need such a thing. Until you do.
As a rule of thumb, the quality of engineering of the core Kubernetes distribution is rock solid and incredible, but anything that's not core is in varying stages of maturity.
How much Kubernetes-adjacent code you actually need to adopt (and therefore, how much risk you take on) varies from project to project and organization to organization.
And with serverless you get significant lock-in, are overpaying for resources, have less ability to debug when things go wrong, have basically zero flexibility and it's very difficult to have a local setup mirror your production one.
It's fine for certain use cases but you get the benefits of both worlds by just using a managed K8s service like AWS EKS.
1) Are you on AWS? Then you don't need Kubernetes. Use Fargate.
2) Are you on Google Cloud? Then you don't need Kubernetes. Use Cloud Run.
3) Are you on Azure? Then you don't need Kubernetes. Use Azure Container Instances.
4) Are you on a PaaS like Heroku? Then you don't need Kubernetes.
5) Are you on a random VPS provider / bare metal machines? You could probably still do without Kubernetes using Docker Swarm (apparently it's not dead!), Nomad, Mesos DC/OS, or a standard Linux box and systemd (or some other process or cluster scheduler).
6) Do you need to solve the bin-packing problem? Do you need to self-host a service mesh of microservices in multiple colocated regions? Do you need a fully automated redundant fault-tolerant network of disposable nodes to constantly reschedule different versions of applications with stringent RBACs, scheduled tasks, dynamic resource allocation, and do you have about a million dollars to spend on building and maintaining it all? Then you need Kubernetes.
> do you have about a million dollars to spend on building and maintaining it all? Then you need Kubernetes.
I think you're overstating the investment necessary to overcome the initial complication of Kubernetes and also understating the benefit of being on a platform with a massive and thriving community behind it.
As an example, in a prior role, there were a set of data engineers that would receive data in the form of MS SQL server backups from which these engineers would need to query and transform data on an exploratory rather than production basis. Certainly one could use an "undifferentiated" service from a cloud provider, but it was also a roughly 5-minute process for me to use the rather high quality helm chart and docker image commonly available to stand up a new service for the benefit of each engineer that had the need.
The process of creating the automation necessary to deploy the helm chart and restore the backups took approximately one hour and could be repeated ad nauseam in the aforementioned 5-minute time period.
There are many, many other examples of this. Want a data-science platform complete out of the box with no vendor lock-in, how about data8.org? The list goes on.
What happens when your car's A/C stops working? Most people think, I'll just get a little can of R134a, fill up the system, and it'll be good as new. Somebody said they did that once and it worked just fine, so it should work for you too, right? I mean, it says so right on the can, and there's YouTube videos of it and everything.
The trouble is, A/C is a complex system. There are moving pieces with specialized oils that oxidize and break down over time. There's a sealed system of pressurized gas. There's a pump, clutch, coils, fans, filter, thermostat, drain, belt, and electronics, Any of those parts could fail in a number of ways. Just to inspect it you need a custom gauge set, a tank of R134a, and a vacuum pump.
Etcd is about as complex as an A/C system. That is one of a dozen components of a Kubernetes system, before we get into custom integrations, which you will need about another dozen of.
The million dollars is to pay for everything needed to set up and maintain all of that, create the custom integrations that do not come turn-key from the community, create the custom integrations the community doesn't even have, integrate it with your development and deployment systems, business requirements, application-specific needs, and so on.
A million is an average. You can get away with less, just like you can get away with pumping a pre-pressurized A/C system with extra coolant: if you're lucky it won't break. When it does, I hope you have either a lot of time, or a lot of money to pay a consultant.
My AC stopped working. In my spare time, I learned a new trade and replaced all my ductwork, the air handler, and the compressor unit with a new modern VRF system and am presently enjoying my air-conditioned home in this bay area heat wave.
Point being, you're asking the wrong guy.
Edit: I see you wrote "car's AC". My answer is pretty much the same.
It's a distributed decentralized database using self-signed certs. Just by itself it requires maintenance: upgrading the software, upgrading the host it runs on, rotating keys, networking, access control, key space maintenance, backup, etc. Here are the docs you need to know to run it: https://etcd.io/docs/v3.4.0/op-guide/ And there's another dozen docs not written there that the admin just sort of finds out over time.
But it's part of other systems too, making the overall thing a system of systems. Interactions between systems of systems are complex and cause unexpected behavior. At some point you will run into an error in K8s that you can't resolve that will require you to debug Etcd. And "Bob" help you if the database gets corrupt or overwritten, or incompatible versions of software screw up what's in the database, etc. (My original analogy was inaccurate... Etcd is more like the engine than just A/C, because if it stops working, everything stops working)
Do you know what happens if an Etcd certificate's SAN field does not include domain names but only IP addresses? The client requests HelloInfo with an empty ServerName so it doesn't trigger a TLS reload on handshake, making it more difficult to replace expired certs. That is a single random quirk in a single component of this software which underpins all of Kubernetes. I cannot sit here and explain every single reason why Etcd is complex; it must suffice to say that the software just is complex, and that this reality means that while it may sometimes be simple for some people to operate, it will definitely not always be simple to operate, and there will come a time that the true cost will emerge.
Now, most people don't need to pay for that high cost of complexity. They can use a SaaS/PaaS product like AWS ECS/Fargate or others, where somebody at some other company is dealing with the cost of complexity for you. All you have to do at that point is run some API calls and everything just works. Not only is it easier, it's immensely cheaper, less time-consuming, and more reliable.
...But you might not even need ECS! There's a lot of work just to get a simple PoC up on Fargate with an NLB, ACM cert, RDS instance, security groups, VPCs, cluster, service, task, etc. Compare that to just spinning up a micro instance and running MariaDB and a Python app on it; the latter you can have done in 20 minutes. If you can avoid complexity and still meet your SLOs, do that.
OK, saying that Azure Container Instances is a substitute for Kubernetes is very misleading. Sorry if this is misrepresenting what you're saying, but basically I see this as saying you don't need k8s, just use Docker without k8s.
> I know for fargate there are some vertical scaling limitations that don't exist with kubernetes
To overcome those limitations, you can use EC2 Container Instances with ECS. And instance auto scaling can now be automated for you via ECS Capacity Providers.
You may want to qualify these absolutes, as in "you may not need k8s, consider blah". For ex I run on k8s and can't use Fargate, same for GCP and Azure.
I have devops engineers who need something to do otherwise they'll quit and move on to other companies that are working with kubernetes.
I think ^^^ the above is the main thing driving adoption. It's a real social engineering problem. Many engineers like to work with the latest and greatest.
To me the question is more: Is there an extremely valid reason to _not_ use K8s.
As a freelancer I visit quite a number of enterprise companies, think: Banks, Insurance, Airports, and they are all making the switch or are full-on invested in living in K8s by default. If it does not run in or was not made with K8s in mind: It will not be used/bought.
Another thing I'm noticing with smaller companies: if you start fresh, you choose k8s. Which means all other stuff is already slowly dying by virtue of not being chosen.
Developers want/expect it, sysadmins see the benefits from day one, and companies see the potential gains of using less cloud resources and a platform that could potentially run in multiple clouds for the first time ever.
K8s, openebs, prometheus/grafana, loki, kustomize, github actions. This is truly where "it's at" at the moment.
I work for a startup whose product is small (half a dozen servers, if relatively beefy ones) clusters that will be run on-prem by customers, at least sometimes in a low-to-no-touch capacity. Most of our application components are micro-ish services that are run on all hosts in the cluster for either extra capacity or fault tolerance.
We currently run everything on mesos/marathon, but are looking to switch away from it. K8s is kinda the “default” option, and is potentially appealing to some potential acquirers and investors.
But I never really see k8s being talked about in that context of “physical hardware that’s on prem, but not on MY prem.” Is there a reason for that? If we go with k8s is it going to bite us? Does anyone have experience with something like that they could share?
> But I never really see k8s being talked about in that context of “physical hardware that’s on prem, but not on MY prem.” Is there a reason for that? If we go with k8s is it going to bite us? Does anyone have experience with something like that they could share?
Kubernetes provides a leaky abstraction above the underlying hardware - the storage and networking are going to be different depending on who is maintaining the Kubernetes cluster. Kubernetes's strength is that it acknowledges the leakiness of the abstraction and makes it explicit. If your customer uses a specific networking and storage provider, Kubernetes makes it easier for you to say (or not) that you have certified your product for those networking and storage providers, and here's what the manifests look like, because there's a standard way of configuring the application to work with that networking (CNI, which powers the standard Service as well as maybe NetworkPolicies) and storage (CSI, specifically StorageClass) provider.
If you just provide Docker images, or VM appliances, then Murphy promises you that you're going to get frustrated support calls from customers saying "your application is slow and we don't understand why." Good luck then.
I have some experience with this. Way I see it, if you have your software on somebody else's on-prem and move to k8s, more importantly than replacing the stack between Linux and your app is a change in mentality and demarcation of responsibilities, as in now your app "runs on k8s" and your clients are responsible for that layer (or they can contract that out) and you are responsible only for your app.
It helps abstract out everything in the stack below your app and easier conceptually on everybody; now the clients can train in k8s and use same set of tooling like prometheus/grafana etc that usually go with k8s, same or similar RBAC access etc.
OTOH realistic expectations need to be set; it's not because it's k8s that there will be no problems or that the learning/adaptation will come without some pain. I suggest writing down some standard procedures for your clients (like upgrades etc) and picking the same set of tooling for all of them (same dashboards, same logging/monitoring/alerting, etc) as a way to homogenize ("standardize") all of them.
There's a great talk from Chick-Fil-A about running kube clusters on bare metal [0]. I'm also going to be taking up a similar problem soon-ish and I'm also looking into Container Linux/derivations for doing a lot of the bare metal, updating, and rollbacks they talk about here. If anyone here has worked on this project, or similar, it would be awesome to get in touch!
One reason, for example, can be providing better reliability to customers who were using a certain application on premises and need/want to continue doing so. I've observed how our solution evolved from a monolithic blob of cpp code, where major failures crashed everything; to several modules, where monitoring could crash and restart and hopefully service wouldn't be interrupted, but when the performance part crashed all service stopped and maintenance time was long; to k8s, where all parts are split into separate containers and the performance part is split into smaller, completely redundant chunks, so when one crashes it is a) restarted without affecting 99% of other users, and b) restarted much quicker, ten times quicker, meaning less downtime for the 1% of affected users.
But k8s introduced non trivial amount of complexity and its own bugs and maintenance cost, meaning new separate engineers to maintain and develop just k8s tooling. And we had to rewrite a lot of legacy code. But the trade off is much better for a big project. Cutting downtimes by an order of magnitude and being able to boast it to the board - apparently priceless :) .
Really provisioning a VM manually is equivalent to provisioning a physical server. Just commented on this above.
Try using eg. Ubuntu and some kind of centralised management tool, like Salt, and install k8s. For better control, use Flux for storing your k8s configuration (deployments, configs, etc) in Git. I believe it would be good for your sanity.
Else your k8s objects will be susceptible to someone doing a klutz and "whoops" your applications are gone, real gone...
I did an on-prem k8s deployment at my last place. It is definitely challenging compared to EKS and GKE, but the difficulty is not in base k8s.
Following the kubeadm getting-started guide on the kubernetes.io site can get you an 'HA', 'production ready' cluster going in a couple of hours. Most of it is pretty mechanical and only needs a couple of key decisions, mainly your networking plugin. The most popular plugins generally have instructions as part of the getting-started guide, which makes the process straightforward.
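As a rough sketch of how few decisions that actually involves, a kubeadm config for an HA control plane can be as small as the following (the endpoint, version, and CIDRs are placeholder values; the podSubnet just has to match what your chosen network plugin expects):

```yaml
# Sketch: passed to `kubeadm init --config kubeadm.yaml` on the first
# control-plane node; further nodes join with `kubeadm join --control-plane`.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.0                              # placeholder version
controlPlaneEndpoint: "api.k8s.example.internal:6443"   # VIP or LB in front of the control plane
networking:
  podSubnet: 10.244.0.0/16      # must match your CNI plugin's expectations
  serviceSubnet: 10.96.0.0/12
```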
Where it quickly becomes difficult is after this step. You have a cluster ready to serve workloads, but it has no storage, no ingress/external load balancer.
Storage can be as simple as NFS volumes (you don't even need a provider for this, but you should use one anyway). Rook/Ceph will work, but now you've just taken on two complex technologies instead of one.
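If you do go the plain-NFS route, a hand-created PersistentVolume is as simple as the sketch below (server and path are placeholders); a provisioner just automates stamping these out per claim.

```yaml
# Sketch: a statically provisioned NFS volume and a claim that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 50Gi
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-static
  nfs:
    server: 192.168.1.10      # placeholder NFS server
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-static
  resources:
    requests:
      storage: 50Gi
```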
Without an external load balancer of some sort, you will have trouble getting traffic into your cluster, and it likely won't actually be HA. You can use MetalLB for this, or hardware appliances. If you're just starting out, though, you can totally get away with setting up CNAME aliases in DNS to your nodes in a round-robin fashion. It won't be HA, but it will work, and it is simple and straightforward.
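For reference, a MetalLB layer-2 setup is only a couple of objects in recent releases (older versions used a ConfigMap instead); the address range below is a placeholder you would carve out of your own node network.

```yaml
# Sketch: tell MetalLB which addresses it may hand out to Services of type
# LoadBalancer, and advertise them over L2/ARP.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```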
Ingress is pretty easy to set up for the most part, usually just a matter of applying an available manifest with a tweak or two. If you go the CNAME route, you will need ingress set up so you can serve http/https on standard ports without too many issues.
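A typical Ingress object then looks roughly like this (hostname and backend Service are placeholders; `ingressClassName` matches whichever controller you installed):

```yaml
# Sketch: route HTTP traffic for one hostname to an in-cluster Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
spec:
  ingressClassName: nginx           # assumes the ingress-nginx controller
  rules:
    - host: app.example.internal    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp        # placeholder Service
                port:
                  number: 80
```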
If you do all these things, then you have a real deal cluster. Things like ingresses are recommended even if you're running in the cloud, so you may find that you're not all that far off from what you might find there.
Overall, the biggest trouble is all the choices you need to make. If you're starting out, maybe read up on two or three of the most popular choices for each step, and then just pick one. Anything that exists entirely within the cluster can usually be expressed purely as source controlled manifests, and kubeadm deployments can be simple shell scripts if you don't make them do everything (i.e. only support one container driver, not all of them).
One major caveat: if you screw up your network layer, you basically have to start over. That isn't strictly true, but it's the one area where you are often better off starting over when you need to make fundamental changes (like podCIDR, serviceCIDR, or your network plugin). Pretty much everything else can be made to coexist with multiple setups at once, or you just delete and redeploy that component.
I would say if you have the question "Do I need Kubernetes?", you don't need it because the benefits are not immediately crystal clear to you.
Also, the author starts the evaluation from the wrong point of view: the decision should be not so much technical as, like everything else, driven by BUSINESS REQUIREMENTS. Evaluating Kubernetes is no different:
- Do you have such high traffic that you need a distributed system?
- Will a unified framework solve all your distribution problems?
- Do you really need high availability?
- Can you swallow the cost of high availability?
- Can you handle the insane complexity of Kubernetes at a reasonable cost?
You should not start asking questions about "Pods, Ingress" or anything Kubernetes specific, those are just implementation details.
Yes if you are on AWS. Significantly speeds up the engineering workflow.
Not sure it is really vendor lock-in. If a customer wanted to move from ECS to Kubernetes, they would need to migrate the manifests, roles/rolebindings, etc. That is a small effort as long as they do not have to rewrite the code.
The article outlines a very nuanced detailing of how to answer that question, but I have a more blunt first consideration: If you need kubernetes, you don't need to ask whether you need it.
Basically, I think that a team/product knows when the time has come in which the infrastructure has grown in complexity so much for it to need something like kubernetes to orchestrate it. If there are doubts, then whatever current setup is in place* is probably still enough and kubernetes is beyond what the team requires.
I am very proud of the one time I managed to convince both my then tech lead and project manager, in one of my past jobs, to move away from kubernetes into a simpler architecture leveraging docker, compose and PaaS.
* Hopefully one using docker and compose or similar, as mentioned in the article.
Can you elaborate on why you think that lxd is easier to use than Docker? I've always found docker build && docker push to be a simple interface to understand. What makes "start[ing] with lxd" more approachable or easier than creating a Dockerfile, given the abundance of Docker-101 tutorials, advice, and expertise available?
I've sprinkled some comments in this thread, but as someone who is working more or less full time with k8s infrastructure, architecture and maintenance (and I do love k8s!), my take is that if you have to ask this question then the answer is invariably: NO.
I need Kubernetes since we're outgrowing Docker Swarm. Docker Swarm has a lot of issues we deal with on a constant basis so it's becoming quite painful.
Can anyone suggest a good migration guide from Docker Swarm -> Kubernetes?
My gentle suggestion is to avoid phrases like "a lot of issues" and instead to list some of the top ones. This could give some people a chance to share how they have overcome some of those specific challenges.
At a recent employer, we moved cold turkey from Swarm to Kubernetes (K8s in the rest of this reply), but we did it one microservice at a time. We didn't want the resulting K8s solution to be compromised by a misguided attempt to foist Swarm concepts onto the K8s way of doing things. Probably the biggest decision is whether to manage the cluster from scratch (not recommended), to use kops to deploy on a cloud platform, or to use a cloud-native offering like EKS on AWS. After that, here's a good guide to help with the differences in configuring the services[1]
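As a rough illustration of the kind of translation involved (names and image are hypothetical, not from the guide), a single Swarm/compose service with `deploy.replicas: 3` ends up as a Deployment plus a Service rather than one compose entry:

```yaml
# Sketch: the K8s equivalent of one Swarm/compose service entry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                # was deploy.replicas in the compose file
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.2.3   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```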
I get it, but it's not really doing the author justice, who's trying to give a quite nuanced overview of where kubernetes is a good fit, and where it isn't.
As somebody who's pretty skeptical about onboarding the big lump of complexity that is k8s, I really appreciated the information in the article.
You see, any pile of junk (Kubernetes being one of them) can have a use case if you search hard enough. Do not learn or work with Kubernetes unless your livelihood depends on it.
I find ECS/Fargate and managed Kubernetes no easier to manage than Kubernetes itself. On the other hand, their inflexibility and vendor-specificity require as much learning as the FOSS tools would, and those skills are not transferable to another cloud vendor, which might turn out to be necessary if your primary provider does not cover the markets you are supposed to operate in.
Using Docker-compose for local dev works very well for our teams, and we avoid the overhead of running and learning K8s operationally.
That said, this approach might change if we were at the scale of a large health or insurance company, but we're not.