
Docker Compose vs. Kubernetes for Local Development

Getting your local dev environment right is a drawn-out process – especially when parity comes at the cost of performance. Let’s compare how Docker Compose and Kubernetes stack up for local development.

Cover image: Docker Compose vs Kubernetes locally


When choosing between using Kubernetes and Docker Compose for local dev, you’re essentially making a tradeoff between performance and architecture. Let’s break it down and see which tool makes the most sense for your local development environment.

Docker Compose for local development

Using Docker Compose for your local dev environment is pretty trivial. There are really only two prerequisites:

  1. Your application must be containerized (e.g., a Dockerfile for every service)
  2. You’ll define your multi-container app in a Compose file (this will include any necessary options for the container(s) in your app to work together, any volumes, etc.)
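The two prerequisites above can be sketched with a minimal Compose file. The service names, image tag, ports, and volume here are hypothetical, just to show the shape of a multi-container definition:

```yaml
# docker-compose.yml – a hypothetical two-service app
services:
  web:
    build: .              # built from the Dockerfile in this directory
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16    # pulled from a registry, not built locally
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

The web service is built from local source while the db service is pulled as a prebuilt image – the same split described in the next paragraph.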

With Docker and Docker Compose, you can containerize any services you’re actively developing, and build/rebuild them as Docker images upon code changes. You can also easily pull existing Docker images from a registry to your machine, either to mock services locally or limit the number of mutable services you’re working on at a time.
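The build/rebuild and pull workflow maps to a couple of standard Compose commands. This is a sketch and assumes a Compose file in your project root:

```shell
# Rebuild images for the services you're iterating on, then start the stack
docker compose up --build

# Pull prebuilt images (e.g. mocks or stable services) from a registry
docker compose pull
```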

When using Compose for local dev, since you’re working with the same (or similar) Docker container(s) that you’ll see at every stage of your development life cycle, you’ll get similar behavior locally to what you’ll get in any cloud environment, minus the advanced orchestration.

Kubernetes for local development

Chances are, you’re using Kubernetes (or a similar orchestration tool) in all of your cloud-based environments and in production. You might want to get similar performance/results by shifting left and using k8s in your local dev environment.

When you’re using Kubernetes locally, you’ll generally be running a lighter-weight version of what you’d see in a pre-production environment. This is because your machine is (in most cases) limited in compute compared to your cloud or on-prem hosts. You also won’t need as many config options for a dev cluster, making spin-ups and spin-downs much quicker.

Using Kubernetes locally is, well, controversial. We’ll get into that a bit later (and at length in a future post).

Comparing Docker Compose and Kubernetes for local development

There are a few key categories to consider when you’re choosing one over the other for local dev. Obviously, there are tradeoffs, but it’s important to consider what’s important to your use case.

Resource constraints

Even with a lightweight Kubernetes distribution in the running, Docker Compose is the clear choice when it comes to local performance. That said, anyone who has run even a small Docker application knows Compose isn’t free either — you’ll still need a decently powerful machine.

Kubernetes is a different story. On the average developer’s computer, local Kubernetes consumes so much compute that it’s hard to run anything other than your application. Of course, a more powerful machine makes it more workable, but at the end of the day, you have to ask yourself whether the results are really worth it.

Orchestration model

With Kubernetes, you can run multi-node clusters, whereas Docker Compose is limited to single-node environments. Having a cluster closer to production means you can see how your application will perform much earlier on, especially when it comes to self-healing, load balancing, and auto-scaling.
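To illustrate, a local multi-node cluster can be described in a few lines with a kind config file (the node roles below are illustrative; kind is covered later in this post):

```yaml
# kind-config.yaml – one control plane and two workers (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

There’s no Compose-file equivalent of this: a Compose stack always runs on the single Docker host it was started on.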

With Docker Compose, you can scale individual services, but this is done manually and is no substitute for Kubernetes’ orchestration features.
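Compose’s manual scaling looks something like this (the service name worker is hypothetical, and all replicas share one host):

```shell
# Run three replicas of a single Compose service on one machine
docker compose up --scale worker=3
```

There’s no rescheduling or self-healing here – if a replica dies, nothing restarts it unless you’ve set a restart policy.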

Configuration

Docker Compose is extremely widely used, and one of the (countless) reasons is that it’s simple to use but has extensive configuration options for power users. You can containerize and define most small-to-medium-sized apps with Compose pretty quickly, resulting in a full-fledged running application. It’s generally simple to figure out why a Compose app isn’t building or running correctly, and there are only a limited number of configurations you’ll need to change.

Kubernetes has a bit more of a learning curve. Most users need at least a high-level understanding of core concepts before they can write a Kubernetes YAML (let alone the multiple YAMLs required to spin up a cluster). Past the YAMLs themselves, you’ll need to apply command-line flags and options to orchestrate the resources together.
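For a sense of scale, here’s roughly what the “multiple YAMLs” look like for a single toy service – resource names and the image tag are hypothetical:

```yaml
# deployment.yaml – a minimal Deployment (names and image are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: my-app:dev
          ports:
            - containerPort: 8000
---
# service.yaml – exposes the Deployment inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8000
```

That’s two resources (plus kubectl apply invocations) for what a Compose file expresses in a handful of lines.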

The ideal environment workflow with Docker Compose and Kubernetes

If you’re developing a cloud native application, you’ll (ideally) have an environment workflow with various types of environments for certain “checkpoints” in the release process. This enables you to test more often, get more stakeholder feedback, and actually keep your dev loop continuous. Here is one way you can deal with these different environment types:

Local development environment

This is where Docker Compose really shines. You’ll have a Compose file to build/manage the services that you’re actively iterating on, and you’ll pull (real or mock) images from your container registry for any remaining services. Most of your inner loop iteration will happen here.

Cloud development environment

Cloud dev environments have recently started to gain traction. While many orgs skip this stage, it can be helpful to have a lightweight environment with decent compute for developing features collaboratively and iterating with more complex integrations. Think GitHub Codespaces, Daytona, or Gitpod. For cloud dev environments, you’ll probably want a single-tenant k3s cluster. Using vCluster for quick provisioning is also a good option here.

Ephemeral PR environment

This will be more production-like than your cloud development environment(s). It will likely run on k3s and include real services, integrations, and (sanitized) data. This environment should quickly build and spin up when someone opens a PR/MR, and it should have the capacity to run the same E2E test suite that you run against staging and production.

You might want to check out a solution like Shipyard for this — Shipyard takes care of the environment provisioning and lifecycle automation.

Main staging environment

Many teams benefit from a static staging environment for any infrastructure that’s difficult to ephemeralize. This will be used for any final testing once all branches are merged to main. This is the real deal, so you’ll probably be using Kubernetes instead of a lighter-weight variant.

Tools for using Docker and Kubernetes for local development

When it comes to development, you’re probably using tools that extend, simplify, and/or optimize Docker and k8s, rather than running them as-is. Here are some tools we use to make development easier:

Docker Compose

This almost goes without saying. A Docker Compose file should be the first thing you write after Dockerizing your application. Why? It’s the quickest and easiest way to get your containers to interface with each other. You’ll also preserve any options and configurations so your Dockerized application will spin up the same way, every time, on every machine.

Not sure how to get started? Docker recently rolled out the docker init command, which inspects your stack and takes the guesswork out of writing a Compose file.
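As a rough sketch of that flow:

```shell
# Run in your project root; Docker inspects the stack and interactively
# scaffolds a Dockerfile, .dockerignore, and Compose file for you
docker init
```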

K3s

When you’re working in any non-production environment, you can get by with a scaled down version of your production infrastructure. Kubernetes (k8s) is powerful, but probably overkill for the vast majority of development tasks and testing. Working with a more lightweight cluster, like k3s, you can get a close approximation of the infrastructure while only needing a fraction of the compute and complexity — with the benefit of more speed.

Read how to set up and spin up a local k3s cluster.
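For reference, the standard single-node k3s install is a one-liner (it requires root, and it installs a systemd/openrc service on Linux):

```shell
# Install and start a single-node k3s server
curl -sfL https://get.k3s.io | sh -

# Verify the node is up
sudo k3s kubectl get nodes
```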

K0s

If you’re looking for something that “just works”, check out k0s. Developed by Team Lens, k0s is easy to run anywhere – bare metal, on-prem, locally, and on any cloud provider. It has no dependencies and is distributed as a single binary. With k0s, you don’t have to worry about config (as you do with most k8s options), and you can get a cluster spun up within minutes – all important features for local dev.
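The quick-start flow, per the k0s docs, is along these lines (requires root):

```shell
# Download the single k0s binary
curl -sSLf https://get.k0s.sh | sudo sh

# Install and start a single-node controller that also runs workloads
sudo k0s install controller --single
sudo k0s start
```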

Read the k0s docs to get started.

Kind (Kubernetes in Docker)

Kind is becoming an increasingly popular way to run a local Kubernetes cluster. It uses Docker containers as nodes. With kind, it’s simple to pull and load Docker images right into your cluster. It was actually made to test Kubernetes, but has been adopted for other use cases, particularly CI and local dev.

To get started, all you’ll need to do is open Docker and run kind create cluster.
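A minimal kind loop looks like this – the image tag my-app:dev is hypothetical:

```shell
# Create a local cluster (nodes run as Docker containers)
kind create cluster

# Build an image locally and load it straight into the cluster,
# no registry push required
docker build -t my-app:dev .
kind load docker-image my-app:dev
```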

The verdict?

At the end of the day, Docker Compose and Kubernetes are at opposite ends of the spectrum when it comes to ease of use. Getting your local development environment right with Kubernetes is nontrivial and will take some fine-tuning, but it can ultimately be a good approximation of what you’re using later in the pipeline. Docker Compose is easy to get running right out of the box, but there will be some parity gaps between your dev environment and staging (and of course, production). For most dev teams, Docker Compose provides the right amount of orchestration locally.

Try Shipyard today

Get isolated, full-stack ephemeral environments on every PR.

