Continuous integration often happens only after a commit or a PR, in a remote environment. However, there’s a ton of benefit in designing these pipelines at the local level and running them early on. That way, you’re using the same steps throughout the SDLC, just running them against different environments, which surfaces incompatibilities between your application and new code changes sooner. Here’s why you should keep your pipeline logic in Makefiles and use them from development to deployment.
Why local-first makes sense for CI/CD
Continuous integration isn’t something that needs to be restricted to remote pipelines. You can get a lot more mileage out of your CI/CD pipelines if you begin integration locally. If you’re able to resolve bugs uncovered by your pipeline right then and there on your machine, you’ll save a lot of time (and CI runner costs).
The core philosophy of continuous integration is keeping code changes small and modular. That way, you can test each new change individually and thoroughly, and stay aware of how it affects your codebase. CI also emphasizes moving fast to avoid merge conflicts or a stale trunk.
The best way to do this is to find out how your code changes fare early on: at the local level. That way, you’re keeping your commit history cleaner, and only pushing changes that pass local integration steps. This way, you can stay in the inner loop, take more risks, and trust your code changes more, instead of feeling uncertain until they hit a remote branch and trigger your cloud-hosted CI/CD.
Keeping your logic outside of the pipelines
You don’t want to spend time building out your CI/CD logic within the pipeline itself. Someday, you might change CI/CD platform providers, and it can take months to translate one provider’s YAML spec into another’s.
The best practice for writing pipelines is treating them as simply a wrapper for your build/test/deploy steps. This way, you can have a single source of truth for both your local and remote workflows, instead of having fully separate pipelines for every stage. This makes maintaining CI/CD much more straightforward, and helps guide you towards good pipeline design principles.
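As a rough sketch of the idea (the target names and scripts here are placeholders, not tied to any real project), the Makefile owns the logic and every environment just invokes it:

test:
	./scripts/run-tests.sh    # hypothetical test script

build:
	./scripts/build.sh        # hypothetical build script

Locally you run make test && make build, and your remote pipeline’s steps run the exact same commands (e.g. a step whose only job is run: make test).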
Why Makefiles?
The Makefile spec has been around since 1976, and is still considered one of the best methods to define build workflows.
A Makefile serves as a collection of scripts linked to make aliases. For example, if you want to start your app’s PostgreSQL database, you’d run make postgres.start, and in your Makefile, you’ve defined that alias like this:
postgres.start:
	docker compose up -d postgres
	docker compose exec postgres \
		sh -c 'while ! nc -z postgres 5432; do sleep 0.1; done'
Makefiles make it easy to ensure your build steps stay consistent. Instead of memorizing (or copy-pasting) several steps, you can invoke the same steps in the same order, every time.
And these steps will likely not change too much throughout your SDLC. The main difference will be the infrastructure they’re run against (e.g. a small local dev environment vs. a staging environment that approximates all production services).
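As a rough sketch of how that can look in practice (ENV, the per-environment compose files, the app service, and the test script are all hypothetical names, not from the example above):

# Same target, different infrastructure: pick the environment via a variable.
ENV ?= local

integration.test:
	docker compose -f docker-compose.$(ENV).yml up -d
	docker compose -f docker-compose.$(ENV).yml run --rm app ./run-tests.sh

Locally you’d just run make integration.test; against staging, it’s the same target with make integration.test ENV=staging.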
If you want to learn more about the beauty of Makefiles, check out our post “Makefiles for Modern Development”.
Using your remote pipeline as a wrapper for your Makefile
A Makefile brings all your tooling together into a centralized spec. You’re invoking different commands and using different tools, so it really is a wrapper in itself. This means you’ll want your Makefile to be polished and up to date, since it should act as your single source of truth for dev/test workflows.
Once you have that ready to go, your CI/CD YAML should be really straightforward to write. Essentially, it’ll serve as a wrapper for your Makefile. In this example, we’re using make commands in a GitHub Actions workflow, where each step invokes a command from the Makefile. This keeps it consistent: you know exactly what’s happening in each step, because it’s the same workflow you’ve used during prior development checkpoints (e.g. locally). You can solve dev/prod parity issues by seeing where the pipeline fails (you already know these exact steps have succeeded on your machine, so it’ll be easier to narrow down the point of failure).
on:
  push:
    branches: [main, dev]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Install dependencies
        run: make deps
      - name: Lint code
        run: make lint
      - name: Run tests
        run: make test
      - name: Build app
        run: make build
      - name: Build Docker image
        run: make docker.build
      - name: Push Docker image
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: make docker.push
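For reference, the Makefile targets invoked above might look something like this sketch; the npm commands, image name, and registry are assumptions standing in for whatever your project actually uses:

.PHONY: deps lint test build docker.build docker.push

IMAGE ?= registry.example.com/my-app        # placeholder registry/image
TAG   ?= $(shell git rev-parse --short HEAD)

deps:
	npm ci

lint:
	npm run lint

test:
	npm test

build:
	npm run build

docker.build:
	docker build -t $(IMAGE):$(TAG) .

docker.push:
	docker push $(IMAGE):$(TAG)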
Using a Makefile as your local CI/CD pipeline
Once your CI/CD logic lives in your Makefile, anything more than a Makefile is overkill for most local CI/CD workflows. Why add yet another wrapper? Your Makefile can invoke and stack make targets, so you can bundle a few together to run your CI/CD steps in order. You can also define a few different variants, e.g. one workflow that seeds the database and runs tests, and another that just lints and builds (sketched at the end of this section).
In your Makefile, you can group your existing targets together:
all: deps lint test build
And then you simply run a single make command to execute the entire local CI/CD process:
make all
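And following the variants idea from earlier, you could define a couple of differently scoped bundles. In this sketch, the target names are illustrative and db.seed is a hypothetical target, while postgres.start is the one defined above:

# A fast pre-commit check vs. a fuller local CI run.
ci.quick: deps lint build

ci.full: deps lint postgres.start db.seed test build

Then make ci.quick gives you a quick sanity check, and make ci.full exercises the whole local pipeline.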
Better pipelines == better production
When you’ve fine-tuned your CI/CD pipeline, you’ll get more value from it by using it at every major SDLC stage. As your code changes go through different “gates” and environments, your CI/CD can stress-test them and ensure they’re production-ready.
And if you want to run CI/CD against full-stack, production-like environments on every code change, Shipyard has you covered. It manages the lifecycle of ephemeral environments through GitOps, so they spin up when you open a PR, update when you make a commit, and spin down automatically on a merge or timeout. It’s that easy. This way, you can run your full E2E test suite whenever you need to, and do CI/CD the right way. Try it free for 30 days, or jump on a call and we’ll help you get set up.