Database migrations with Node.js and PostgreSQL

Notes on how I’ve set up database migrations for an existing project that was running in production. Until then all database setup had been manual and undocumented.

Goals:

  • Being able to write and apply migrations
  • Automatically applying database migrations when deploying code
  • Being able to start up a fresh database locally and load it with the correct schema

Choosing the right tool for the job

After looking at Prisma, db-migrate and pg-migrate I decided to go with pg-migrate. Prisma has great migration tools, but comes with a whole ORM-layer ecosystem that you have to buy into. db-migrate's documentation didn't look great at a quick glance. Both Prisma and db-migrate are written to handle many different databases, which means some Postgres data types won't translate 1:1 to the migrations/models you write. That's why I went with pg-migrate.

From the “Explanation and Goals” section of the pg-migrate README (https://github.com/salsita/node-pg-migrate):

Why only Postgres? - By writing this migration tool specifically for postgres instead of accommodating many databases, we can actually provide a full featured tool that is much simpler to use and maintain. I was tired of using crippled database tools just in case one day we switch our database.

Async / Sync - Everything is async in node, and that’s great, but a migration tool should really just be a fancy wrapper that generates SQL. […]

Naming / Raw Sql - Many tools force you to use their constants to do things like specify data types. Again, this tool should be a fancy wrapper that generates SQL[…]

That’s exactly the kind of competent, lightweight tool I was looking for: one that doesn’t get in the way of existing processes.

Acquire the current schema

The first thing we need to do is get the current schema from the production database. This will be our first migration.

mkdir migrations
pg_dump --schema-only --no-acl $DATABASE_URL > migrations/0000000000000-init.sql
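
The resulting file is plain SQL, straight from pg_dump. Trimmed down, and with a hypothetical users table standing in for the real schema, it looks roughly like this:

-- pg_dump output (trimmed, hypothetical users table for illustration)
SET statement_timeout = 0;
SET client_encoding = 'UTF8';

CREATE TABLE public.users (
    id integer NOT NULL,
    email character varying(255) NOT NULL
);

ALTER TABLE ONLY public.users
    ADD CONSTRAINT users_pkey PRIMARY KEY (id);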

Install pg-migrate

We need to install node-pg-migrate (it relies on the pg driver to talk to Postgres, which an existing Node/Postgres project will most likely already have as a dependency).

npm install node-pg-migrate

Add the following to package.json.

"scripts": {
    [...]
    "migrate:up": "node-pg-migrate up",
    "migrate:down": "node-pg-migrate down",
    "migrate:redo": "node-pg-migrate redo",
    "migrate:create": "node-pg-migrate create"
  },

Get a fresh database running locally

docker-compose.yml:

version: "3"
services:
  db:
    image: "postgres:13"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:

Run the new database with

docker-compose up

Set the database url environment variable

export DATABASE_URL=postgres://postgres:postgres@localhost:5432/postgres

Now we can run our migration against the db:

npx node-pg-migrate up

And with that, we now have an empty database running locally with the same schema and tables as our production database.
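
To double-check what was applied you can peek at the bookkeeping table that node-pg-migrate creates (pgmigrations by default), for example from psql:

-- Run from psql, e.g. psql "$DATABASE_URL"
-- pgmigrations is the table node-pg-migrate uses to track applied migrations
SELECT * FROM pgmigrations;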

Write migrations

Create a new migration with

yarn migrate:create my migration

node-pg-migrate will use the file format of the previous migration if you don’t specify one, so our new migration will be a .sql file like our first. This is fine if you prefer writing plain SQL. You can specify that you want TypeScript or JavaScript instead with --migration-file-language (js, ts or sql).

Example migration:

-- Up Migration

CREATE TABLE links (
  id SERIAL PRIMARY KEY,
  url VARCHAR(255) NOT NULL,
  name VARCHAR(255) NOT NULL,
  description VARCHAR (255),
  last_update DATE
);

INSERT INTO links (url, name)
VALUES ('https://synvinkel.org', 'Synvinkel');

-- Down Migration
DROP TABLE links;

Apply your new migration with:

yarn migrate:up

To make sure that your down migration works as expected you can run

yarn migrate:redo

This will run one down migration followed by one up migration. If the up migration fails when running redo, the down migration you’ve written isn’t rolling things back correctly and needs to be adjusted.
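
For example, a later migration that adds a column to the links table (a hypothetical clicks counter, purely for illustration) keeps the down migration as an exact mirror of the up, so redo can roll it back and re-apply it cleanly:

-- Up Migration
ALTER TABLE links ADD COLUMN clicks INTEGER NOT NULL DEFAULT 0;

-- Down Migration
ALTER TABLE links DROP COLUMN clicks;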

Production

To make full use of the migrations we must make sure they run against our production database whenever we deploy new code. If a migration fails, the build should fail.

The following sections are specific to deploying with Google Cloud Build.

Automating applying database migrations when deploying code

Example cloudbuild.yaml for deploying a Docker image to Cloud Run: build the image, push it to the registry, run migrations and finally deploy. If any step fails, the entire build fails.

cloudbuild.yaml:

steps:
  # Build Dockerfile
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/api:${SHORT_SHA}', '.']

  # Push the image to the registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/api:${SHORT_SHA}']

  # Use exec-wrapper to run migrations
  - name: "gcr.io/google-appengine/exec-wrapper"
    args: ["-i", "gcr.io/$PROJECT_ID/api:${SHORT_SHA}",
          "-e", "DATABASE_URL=${_DATABASE_URL}",
          "-s", "${_CLOUD_SQL_INSTANCE}",
          "--", "node", "node_modules/node-pg-migrate/bin/node-pg-migrate", "up"]
  
  # Deploy
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      [
        'run',
        'deploy',
        'api',
        '--platform',
        'managed',
        '--image',
        'gcr.io/$PROJECT_ID/api:${SHORT_SHA}',
        '--region',
        'europe-west1'
      ]
