Multiple Dockerfiles for different build versions can be a hassle to maintain. Plus, having Dockerfiles that vary too much from each other is a poor practice: you’ll decrease dev/prod parity, leading to unpredictable behavior as code changes get deployed to different environments. Here are a few different ways you can approach Dockerfiles for your app, and how you can make your codebase more environment-agnostic.
Why Dev/Prod Parity is Important
Dev/prod parity is a software best practice popularized by the 12 Factor App. This principle recommends that teams keep the gaps between different environments as small as possible. When environments are kept consistent with each other, you have fewer surprises and can better predict how a feature will behave in production after testing it in staging. Dev/prod parity also leads to more maintainable software, as you are dealing with fewer config files.
You also won’t run into environment drift as frequently (which can be exhausting to remediate). In practice, implementing dev/prod parity is difficult because your hardware/infra for local dev is quite different from your production server, so these are often built differently because they need to be.
That’s where Docker comes in.
One of the main benefits of Docker is being able to build your image in one environment and be confident it works the same way in another.
For example, if you build your image in development, push it to CI and then auto-deploy that to production, it’s nice to know that what you tested locally in development is tested in a comparable way in CI and then ultimately pulled down in production with no surprises.
The last thing you want is having your tests pass in development but fail in CI, or even worse, something breaks in production in an unexpected way.
Development vs CI vs production image difference tolerance
It’s an interesting thing to think about because there are a few ways for your images to end up being different across environments. It all comes down to what you’re optimizing for, and there are trade-offs to consider.
Ways for your images to be built across environments:
- Identical with no or limited exceptions
- Use the same Dockerfile but tweak a few behaviors with build args
- Completely different Dockerfiles
In practice I find the middle option to be a nice balance. Let’s go over a couple of behaviors:
Using the same Dockerfile with tweaks
Maybe your web framework of choice has the idea of digesting / pre-compiling assets (Rails, Django, Flask, etc.) but you only want this step to happen in non-development environments such as production, staging or ephemeral preview environments.
You could choose to introduce a build argument like RAILS_ENV which defaults to production, so if you build your image normally it will get pre-compiled assets. In development you may have an .env file which sets it to development, and Docker Compose picks that up, giving you a drop-in solution that lets you build images with both values.
Then in your Dockerfile you can have a RUN instruction with an if statement like this:
ARG RAILS_ENV="production"
RUN if [ "${RAILS_ENV}" != "development" ]; then \
  SECRET_KEY_BASE_DUMMY=1 rails assets:precompile; fi
This is what Nick Janetakis does in his Rails, Django and Flask example Docker starter apps.
This feels like an ok compromise because in the end we want our assets included in our Docker image in production, but we don’t want them to be pre-compiled in development.
Keep in mind we’re talking about build time here. You can build your image in production mode so your assets get pre-compiled, but still run your test suite with RAILS_ENV=test or whatever your framework supports. This lets you get the best of both worlds!
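On the Compose side, a minimal sketch of wiring that up could look like this (the web service name is an assumption):

services:
  web:
    build:
      context: .
      args:
        - RAILS_ENV=${RAILS_ENV:-production}

With an .env file next to your compose file containing RAILS_ENV=development, local builds skip pre-compiling assets while CI and production builds keep the production default.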
Another example of a build-time behavior change is installing package dependencies per environment. For example with Ruby, Bundler lets you configure BUNDLE_WITHOUT="development" to skip installing dev dependencies. Likewise with Python, uv lets you pass --no-dev to skip dev dependencies. Yarn and npm have similar features.
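As a rough sketch of what the Bundler variant can look like in a Dockerfile (the group names are assumptions; BUNDLE_WITHOUT takes a colon-separated list of groups):

ARG BUNDLE_WITHOUT="development:test"
ENV BUNDLE_WITHOUT="${BUNDLE_WITHOUT}"
RUN bundle install

Building with --build-arg BUNDLE_WITHOUT="" then gives you an image with every group installed, which is handy for running your test suite in CI.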
This requires you to organize your dependencies and be diligent about it, but the upside is that you can reduce the size of your Docker image and its attack surface by installing only the dependencies required for the environment your app runs in.
There are trade-offs to consider here though. If you build without development and test dependencies, you won’t be able to run your test suite, so you may find yourself building an image in CI to run your tests and, if they pass, building a second image without development and test dependencies, which is the image you ship to production.
That adds time to your CI pipeline and can also cause issues in production if a dependency ends up in the development section when it’s actually referenced in production. I’ve seen this happen with clients in the past, where a Faker package was used to generate placeholder information in production but was classified as a development dependency.
If your Docker image is built with all dependencies this is ok, but if you skip development dependencies then suddenly you have a run-time error in production when someone accesses that specific feature on your site.
With that said, if you’re optimizing for tiny images with the least attack surface then this could very well be worth it in your case. Personally I like to keep things easier to manage by building, testing and shipping the same image, so by default I tend to skip this optimization.
Identical vs. completely different Dockerfiles
I tend not to use either of these solutions, but that doesn’t mean they don’t have advantages or aspects worth considering.
If you have no behavior changes at build time between environments then you could end up with an identical Dockerfile and identical images across environments. It’s nice knowing that what you see is what you get!
As for completely different Dockerfiles, I find the maintenance burden too high since you need to keep multiple files in sync as you make changes. It also encourages habits where it’s easy to add something to prod but not dev, and you could end up with even bigger drift between environments.
On that note, if you have a bunch of behavioral changes between environments it might be cleaner to split them into separate files, or at least into separate stages of a multi-stage build, instead of adding a ton of build arg conditionals.
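For example, a trimmed-down multi-stage sketch for a Node app could look like this (the stage names and commands are assumptions, not a drop-in config):

FROM node:22-slim AS base
WORKDIR /app
COPY package*.json ./

FROM base AS dev
RUN npm ci
COPY . .
CMD ["npm", "run", "dev"]

FROM base AS prod
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]

You can then pick a stage with docker build --target dev . locally, or target: dev under build in your Compose file, while production targets prod.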
Like anything, there’s a balance depending on your use case, but in our opinion you can’t go too wrong defaulting to a single Dockerfile that uses multi-stage builds where applicable, with a few behavior-changing conditionals sprinkled in.
Making your codebase multi-environment friendly
To set yourself up for better dev/prod parity, there are a few things you’ll want to keep in mind during development. Most of this will only touch your Dockerfile(s) and Compose file.
Your code itself shouldn’t change!
Code itself should be written to be infrastructure agnostic. Code that requires updates or patches before being ported to different environments isn’t maintainable and is poor practice overall. This means that all of your app config should instead live in your Dockerfile(s), Compose file(s), Makefiles, lockfiles, etc.
Conveniently, you can use env vars to toggle logic and behaviors at different environment stages. For example, if you want more logging during development builds, you can check your NODE_ENV value and add logs that are conditional on it being set to development.
One of the biggest codebase faux pas that we see is hardcoded URLs in API calls. This breaks things between environments, as it can route requests to the wrong services (e.g. ending up with a call to your staging API in production) or just fail altogether. Instead, you can use an env var to dynamically set the base URL.
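For instance, you could feed the base URL in through your Compose file and only hardcode a local fallback (the API_BASE_URL name is an assumption):

services:
  web:
    environment:
      - API_BASE_URL=${API_BASE_URL:-http://localhost:8000}

Staging and production then set API_BASE_URL to their own values, and the code that reads it never has to change.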
Set your UID and GID
One of the root causes of code changes breaking in production is that your UID and GID values are suddenly entirely different! Many devs don’t set these at the local level, so when the UID/GID values differ in production, they see unexpected behavior from the different permissions.
On Linux and Mac, you can check your current UID and GID with id -u and id -g, respectively.
You can set these in your Dockerfile to keep them consistent across builds:
ARG GID=1000
ARG UID=1000
You’ll need to configure these for your user in the container as well. In the case of a node app, you’ll set them this way:
RUN groupmod -g "${GID}" node && usermod -u "${UID}" -g "${GID}" node
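Then you can pass your host’s values through at build time (the myapp tag is a placeholder):

docker build --build-arg UID="$(id -u)" --build-arg GID="$(id -g)" -t myapp .

In production you can keep the 1000/1000 defaults, so file ownership and permissions behave consistently across builds.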
Use your env vars
At Shipyard, we’re biased towards using a single Dockerfile for all builds, and changing env var values to adapt to different environments. There’s a lot of behavior you can alter to improve performance and security. We’d recommend creating env vars for:
- number of workers/threads
- API tokens (and change values for dev, staging, and production)
- log toggling
- base URL
- environment stage (dev, prod, etc.)
Even though Docker mitigates much of the config drift and parity issues, you’re still running your local dev env on a very different machine from your remote environments. Don’t be afraid to play around with env vars for each stage. Keeping track of these changes via env vars makes it much easier to document differences between builds.
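As a sketch, a local .env file might dial things down while production sets its own values (the variable names here are illustrative, not requirements):

# .env for local development; staging/production set their own values
WEB_CONCURRENCY=1
LOG_LEVEL=debug
APP_ENV=development
API_BASE_URL=http://localhost:8000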
Use the same web server for dev and prod
Even though it’s easier short-term, using different web servers across environments leads to parity issues down the road. Many teams use lightweight servers like webpack-dev-server, Django’s runserver, or Node’s built-in HTTP module for local dev, then switch to Apache, Nginx, or IIS in production.
The differences are usually pretty subtle but get annoying over time; you’ll see this in:
- how requests are handled
- how static files get served
- caching behaviors
- SSL/TLS handling (cookies/security headers)
You’ll want to include the production web server in your local environment (e.g. if you’re using Nginx in prod, use the same service in your local Docker Compose definition).
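As a minimal sketch, assuming Nginx in production, your local Compose file could run it in front of your app (the ports and config path are assumptions):

services:
  web:
    build: .
  nginx:
    image: nginx:1.27
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web

That way requests, static files, caching and headers flow through the same server locally that they will in production.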
Docker Compose? Done.
Looking to use your dev Dockerfile(s) and Compose file across all your environments? Shipyard makes that easy. We take that config and give you full-stack, automated ephemeral environments for every branch/PR. If it runs locally, it runs on Shipyard!
Don’t just take our word for it, see for yourself. Kick off a 30-day free trial and build/test/deploy faster than ever.