
An Annotated Docker Config for Frontend Web Development

A local development environment with Docker allows you to shrink-wrap the devops your project needs as config, making onboarding frictionless

Docker is a tool for containerizing your applications, which means that your application is shrink-wrapped with the environment that it needs to run.

This allows you to define the devops your application needs in order to run as config, which can then be easily replicated and reused.

The principles and approach discussed in this article are universal.

While there are many uses for Docker, this article will focus on using Docker as a local environment for frontend web development.

Although Craft CMS is referenced in this article, Docker works well for any kind of web development with any kind of CMS or dev stack (Laravel, NodeJS, Rails, whatevs).

The Docker config used here is used in both the devMode.fm GitHub repo, and in the nystudio107/craft boilerplate Composer project if you want to see some “in the wild” examples.

The Docker config on its own can be found at nystudio107/docker-images, and the pre-built base images are up on DockerHub.

Why Docker?

If you’re doing frontend web development, you very likely already have some kind of a local development environment.

So why should you use Docker instead?

This is a very reasonable question to ask, because any kind of switch of tooling requires some upskilling, and some work.

I’ve long been using Homestead — which is really just a custom Vagrant box with some extras — as my local dev environment, as discussed in the Local Development with Vagrant / Homestead article.

I’d chosen to use Homestead because I wanted a local dev environment that was deterministic, disposable, and separated my development environment from my actual computer.

Local development comparison

Docker has all of these advantages, but also a much more lightweight approach. Here are the advantages of Docker for me:

  • Each application has exactly the environment it needs to run, including specific versions of any of the plumbing needed to get it to work (PHP, MySQL, Postgres, whatever)
  • Onboarding others becomes trivial: all they need to do is install Docker and type docker-compose up and away they go (see the sketch just after this list)
  • Your development environment is entirely disposable; if something goes wrong, you just delete it and fire up a new one
  • Your local computer is separate from your development environment, so switching computers is trivial, and you won’t run into issues where you hose your computer or are stuck with conflicting versions of devops services
  • The cost of trying different versions of various services is low; just change a number in a .yaml file, docker-compose up, and away you go
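
To make that concrete, here’s a minimal sketch of what onboarding a new team member might look like; the repository URL is just a placeholder for your own project:

# Clone the project (placeholder URL)
git clone https://github.com/example/my-craft-project.git
cd my-craft-project

# Copy the example .env into place
cp cms/example.env cms/.env

# Build the images & start every service defined in docker-compose.yaml
docker-compose up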

There are other advantages as well, but these are the more important ones for me.

Additionally, containerizing your application in local development is a great first step to using a containerized deployment process, and running Docker in production as well.

A disadvantage with any kind of virtualization is performance, but that can be mitigated by having modern hardware, a bunch of memory, and optimizing Docker via the Performance Tuning Docker for Mac article.

Understanding Docker

This article is not a comprehensive tutorial on Docker, but I will attempt to explain some of the more important, broader concepts.

Docker has the notion of containers, each of which runs one or more services. You can think of each container as a mini virtual machine (even though technically, they are not).

While you can run multiple services in a single Docker container, separating each service out into a separate container has many advantages.

If PHP, Apache, and MySQL are all in separate containers, they won’t affect each other, and can also be more easily swapped in and out.

If you decide you want to use Nginx or Postgres instead, the decoupling into separate containers makes it easy!

Docker containers are built from Docker images, which can be thought of as a recipe for building the container, with all of the files and code needed to make it happen.

If a Docker image is the recipe, a Docker container is the finished resulting meal.

Docker images are almost always layered on top of other existing images that they extend FROM. For instance, you might have a base image from Ubuntu or Alpine Linux that provides the necessary operating system layer for other processes like Nginx to run.

Docker Image Layers

This layering works thanks to the Union file system, which handles composing all the layers of the cake together for you.

We said earlier that Docker is more lightweight than running a full Vagrant VM, and it is… but unfortunately, unless you’re running Linux there still is a virtualization layer running, which is HyperKit for the Mac, and Hyper‑V for Windows.

Docker for Mac & Windows still has a virtualization layer, it’s just relatively lightweight.

Fortunately, you don’t need to be concerned with any of this, but the performance implications do inform some of the decisions we’ve made in the Docker config presented here.

For more information on Docker, I’d highly recommend the Docker Mastery course (if it’s not on sale now, don’t worry, it will be at some point) and also the following devMode.fm episodes:

…and there are tons of other excellent educational resources on Docker out there, such as Matt Gray’s Craft in Docker: Everything I’ve Learnt presentation, and his excellent A Craft CMS Development Workflow With Docker series.

In this article, we will focus on annotating a real-world Docker config that’s used in production. We’ll discuss various Docker concepts as we go, but the primary goal here is documenting a working config.

This article is what I wished existed when I started learning Docker

I learn best by looking at a working example, and picking it apart. If you do, too, let’s get going!

My Docker Directory Structure

This Docker setup uses a directory structure that looks like this (don’t worry, it’s not as complex as it seems; many of the Docker images here are for reference only, and are actually pre-built):

├── buddy.yml
├── buildchain
│   ├── package.json
│   ├── package-lock.json
│   ├── postcss.config.js
│   ├── tailwind.config.js
│   ├── webpack.common.js
│   ├── webpack.dev.js
│   ├── webpack.prod.js
│   └── webpack.settings.js
├── CHANGELOG.md
├── cms
│   ├── composer.json
│   ├── composer.lock
│   ├── config
│   ├── craft
│   ├── craft.bat
│   ├── example.env
│   ├── modules
│   ├── storage
│   ├── templates
│   ├── vendor
│   └── web
├── db-seed
│   └── db_seed.sql
├── docker-compose.yml
├── docker-config
│   ├── mariadb
│   │   └── Dockerfile
│   ├── nginx
│   │   ├── default.conf
│   │   └── Dockerfile
│   ├── node-dev-base
│   │   └── Dockerfile
│   ├── node-dev-webpack
│   │   └── Dockerfile
│   ├── php-dev-base
│   │   ├── Dockerfile
│   │   ├── xdebug.ini
│   │   └── zzz-docker.conf
│   ├── php-dev-craft
│   │   └── Dockerfile
│   ├── php-prod-base
│   │   ├── Dockerfile
│   │   └── zzz-docker.conf
│   ├── php-prod-craft
│   │   ├── Dockerfile
│   │   └── run_queue.sh
│   ├── postgres
│   │   └── Dockerfile
│   └── redis
│       └── Dockerfile
├── migrations
├── scripts
│   ├── common
│   ├── docker_prod_build.sh
│   ├── docker_pull_db.sh
│   ├── docker_restore_db.sh
│   └── example.env.sh
├── src
│   ├── conf
│   ├── css
│   ├── img
│   ├── js
│   ├── php
│   ├── templates -> ../cms/templates
│   └── vue
└── tsconfig.json

Here’s an explanation of what the top-level directories are:

  • cms — everything needed to run Craft CMS. This is the “app” of the project
  • docker-config — an individual directory for each service that the Docker setup uses, with a Dockerfile and other ancillary config files therein
  • scripts — helper shell scripts that do things like pull a remote or local database into the running Docker container. These are derived from the Craft-Scripts shell scripts
  • src — the frontend JavaScript, CSS, Vue, etc. source code that the project uses

Each service is referenced in the docker-compose.yaml file, and defined in the Dockerfile that is in the corresponding directory in the docker-config/ directory.

It isn’t strictly necessary to have a separate Dockerfile for each service if they are just derived from a base image, but I like the consistency, and the ease of future expansion should something custom be necessary down the road.

You’ll also notice that there are php-dev-base and php-dev-craft directories, as well as node-dev-base and node-dev-webpack directories, and might be wondering why they aren’t consolidated.

The reason is that there’s a whole lot of the base setup in both that just never changes, so instead of rebuilding that each time, we can build it once and publish the images up on DockerHub.com as nystudio107/php-dev-base and nystudio107/node-dev-base.

Then we can layer anything specific about our project on top of these base images in the respective -craft services. This saves us significant building time, while keeping flexibility.
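
Because those base images live up on DockerHub, Docker will pull them down automatically the first time the -craft images are built; if you like, you can also grab them ahead of time:

# Optional: pre-pull the published base images from DockerHub
docker pull nystudio107/php-dev-base
docker pull nystudio107/node-dev-base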

The docker-compose.yaml file

While a docker-compose.yaml file isn’t required when using Docker, from a practical point of view, you’ll almost always use it. The docker-compose.yaml file allows you to define multiple containers for running the services you need, and coordinate starting them up and shutting them down in unison.

Then all you need to do is run docker-compose up via the terminal in a directory that has a docker-compose.yaml file, and Docker will start up all of your containers for you!

Here’s an example of what that might look like, starting up your Docker containers:

Example docker-compose up output
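
Here’s a quick cheat sheet of the docker-compose commands you’ll likely use day-to-day, run from the directory that contains the docker-compose.yaml file:

# Start every service in the foreground (halt them with Control-C)
docker-compose up

# …or start them detached, in the background
docker-compose up -d

# Tail the logs of all services (or just one, e.g. `docker-compose logs -f php`)
docker-compose logs -f

# Stop & remove the containers (named volumes stick around)
docker-compose down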

Let’s have a look at our docker-compose.yaml file:

version: '3.7'

services:
  # nginx - web server
  nginx:
    build:
      context: ./docker-config/nginx
      dockerfile: ./Dockerfile
    env_file: &env
      - ./cms/.env
    init: true
    ports:
      - "8000:80"
    volumes:
      - cpresources:/var/www/project/cms/web/cpresources
      - ./cms/web:/var/www/project/cms/web:cached
  # php - run php-fpm
  php:
    build: &php-build
      context: ./docker-config/php-prod-craft
      dockerfile: ./Dockerfile
    depends_on:
      - "mariadb"
      - "redis"
    env_file:
      *env
    expose:
      - "9000"
    init: true
    volumes: &php-volumes
      - cpresources:/var/www/project/cms/web/cpresources
      - storage:/var/www/project/cms/storage
      - ./cms:/var/www/project/cms:cached
      - ./cms/vendor:/var/www/project/cms/vendor:delegated
      - ./cms/storage/logs:/var/www/project/cms/storage/logs:delegated
  # php - run php-fpm with xdebug
  php_xdebug:
    build:
      context: ./docker-config/php-dev-craft
      dockerfile: ./Dockerfile
    depends_on:
      - "php"
    env_file:
      *env
    expose:
      - "9000"
    init: true
    user: www-data
    volumes:
      *php-volumes
  # queue - runs queue jobs via php craft queue/listen
  queue:
    build:
      *php-build
    command: /var/www/project/run_queue.sh
    depends_on:
      - "php"
    env_file:
      *env
    expose:
      - "9001"
    init: true
    user: www-data
    volumes:
      *php-volumes
  # mariadb - database
  mariadb:
    build:
      context: ./docker-config/mariadb
      dockerfile: ./Dockerfile
    env_file:
      *env
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: project
    init: true
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql
      - ./db-seed/:/docker-entrypoint-initdb.d
  # redis - key/value database for caching & php sessions
  redis:
    build:
      context: ./docker-config/redis
      dockerfile: ./Dockerfile
    expose:
      - "6379"
    init: true
  # webpack - frontend build system
  webpack:
    build:
      context: ./docker-config/node-dev-webpack
      dockerfile: ./Dockerfile
    env_file:
      *env
    init: true
    ports:
      - "8080:8080"
    volumes:
      - ./tsconfig.json:/var/www/project/tsconfig.json:cached
      - ./buildchain:/var/www/project/buildchain:cached
      - ./buildchain/node_modules:/var/www/project/buildchain/node_modules:delegated
      - ./cms/web/dist:/var/www/project/cms/web/dist:delegated
      - ./src:/var/www/project/src:cached
      - ./cms/templates:/var/www/project/cms/templates:cached

volumes:
  db-data:
  cpresources:
  storage:

This .yaml file has 3 top-level keys:

  • version — the version number of the Docker Compose file, which corresponds to different capabilities offered by different versions of the Docker Engine
  • services — each service corresponds to a separate Docker container that is created using a separate Docker image
  • volumes — named volumes that are mounted and can be shared amongst your Docker containers (but not your host computer), for storing persistent data

We’ll detail each service below, but there are a few interesting tidbits to cover first.

BUILD

When you’re creating a Docker container, you can either base it on an existing image (either a local image, or one pulled down from DockerHub.com), or you can build it locally via a Dockerfile.

As mentioned above, I chose the methodology that each service would be created as a build from a Dockerfile (all of which extend FROM an image up on DockerHub.com) to keep things consistent.

This means that some of the Dockerfiles we use are nothing more than a single line, e.g.: FROM mariadb:10.3, but this setup does allow for expansion.

The two keys used for build are:

  • context — this specifies where the working directory for the build should be, relative to the docker-compose.yaml file. This is set to the root directory of each service
  • dockerfile — this specifies a path to the Dockerfile to use to build the service’s Docker container. Think of the Dockerfile as a local Docker image

So the context is always the root directory of each service, with the Dockerfile and any supporting files for each service off in a separate directory. We do it this way so that we’re not passing down more than is needed when building the Docker images, which would slow down the build process significantly (thanks to Mizux Seiha & Patrick Harrington for pointing this out!).

DEPENDS_ON

This just lets you specify what other services this particular service depends on; this allows you to ensure that other containers are up and running before this container starts up.

ENV_FILE

The env_file setting specifies a path to your .env file for key/value pairs that will be injected into a Docker container.

Docker does not allow for quotes in its .env file, which is contrary to how .env files work almost everywhere else… so remove any quotes you have in your .env file.

You’ll notice that for the nginx service, there’s a strange &env value in the env_file setting, and for the other services, the setting is *env. This is taking advantage of YAML aliases, so if we do change the .env file path, we only have to do it in one place.

Doing it this way also ensures that all of the .env environment variables are available in every container. For more on environment variables, check out the Flat Multi-Environment Config for Craft CMS 3 article.

Because it’s Docker that is injecting these .env environment variables, if you change your .env file, you’ll need to restart your Docker containers.
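
A quick sketch of what that looks like; recreating the containers (rather than just restarting them in place) guarantees the new values get injected:

# After editing ./cms/.env, recreate the containers so Docker
# re-reads the env_file values
docker-compose down
docker-compose up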

INIT

Setting init: true for a service runs a lightweight init process as PID 1 that forwards signals to your service’s process, which allows the containers to terminate quickly when you halt them with Control-C.

LINKS

Links in Docker allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate, but I like being explicit about it.

They come into play when one container needs to talk to another. For example, if you would normally communicate with your database via localhost, in our setup you’d instead use the host name mariadb.

The key take-away is that when containers need to talk to each other, they do so over the internal Docker network, and refer to each other via their service or links name.
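
Here’s a rough way to see that in action from inside the running php container (the craft_php_1 container name assumes a Compose project named craft; adjust it to whatever docker ps shows for you):

# Open a shell inside the php container
docker exec -it craft_php_1 /bin/bash

# …then, inside the container, the database is reachable at the service
# name "mariadb" rather than localhost (credentials come from the
# environment block in docker-compose.yaml)
php -r "new PDO('mysql:host=mariadb;dbname=project', 'project', 'project'); echo 'connected';"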

PORTS

This specifies the port that should be exposed outside of the container, followed by the port that the container uses internally. So for example, the nginx service has "8000:80", which means the externally accessible port for the Nginx webserver is 8000, and the internal port the service runs on is 80.

If this sounds confusing, understand that Docker uses its own internal network to allow containers to talk to each other, as well as the outside world.
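
In practice, that means you reach the containers from your host machine via the published ports, something like this (assuming you have curl and a MySQL client installed locally):

# The nginx service publishes internal port 80 as port 8000 on the host
curl -I http://localhost:8000/

# The mariadb service publishes 3306:3306, so a local client can connect
# to the containerized database directly
mysql -h 127.0.0.1 -P 3306 -u project -pproject project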

VOLUMES

Docker containers run in their own little world, which is great for isolation purposes, but at some point you do need to share things from your “host” computer with the Docker container.

Docker volumes allow you to do this. You specify either a named volume or a path on your host, followed by the path where this volume should be bind mounted in the Docker container.

This is where performance problems can happen with Docker on the Mac and Windows, so we use some hints to help with performance:

  • consistent — perfect consistency (host and container have an identical view of the mount at all times)
  • cached — the host’s view is authoritative (permit delays before updates on the host appear in the container)
  • delegated — the container’s view is authoritative (permit delays before updates on the container appear in the host)

So for things like node_modules/ and vendor/ we mark them as :delegated, because while we want them shared, the container is in control of modifying these volumes.

Some Docker setups I’ve seen put these directories into a named volume, which means they are visible only to the Docker containers.

But the problem is we lose out on our editor auto-completion, because our editor has nothing to index.

This is a non-negotiable for me

See the Auto-Complete Craft CMS 3 APIs in Twig with PhpStorm article for details.

Service: Nginx

Nginx is the web server of choice for me, both in local dev and in production.

FROM nginx:1.16

COPY ./default.conf /etc/nginx/conf.d/default.conf

We’ve based the container on the nginx image, tagged at version 1.16.

The only modification it makes is COPYing our default.conf file into place:

# default Docker DNS server
resolver 127.0.0.11;

map $cookie_XDEBUG_SESSION $my_fastcgi_pass {
    default php_xdebug;
    '' php;
}

server {
    listen 80;
    listen [::]:80;

    server_name _;
    root /var/www/project/cms/web;
    index index.html index.htm index.php;
    charset utf-8;

    gzip_static  on;

    ssi on;

    client_max_body_size 0;

    error_page 404 /index.php?$query_string;

    access_log off;
    error_log /dev/stdout info;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        try_files $uri/index.html $uri $uri/ /index.php?$query_string;
    }

    location ~ [^/]\.php(/|$) {
        try_files $uri $uri/ /index.php?$query_string;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass $my_fastcgi_pass:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param HTTP_PROXY "";

        add_header Last-Modified $date_gmt;
        add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
        if_modified_since off;
        expires off;
        etag off;

        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }
}

This is just a simple Nginx config that works well with Craft CMS. The map block at the top routes any request that carries an XDEBUG_SESSION cookie to the php_xdebug service, and everything else to the plain php service, while the resolver line points at Docker’s embedded DNS so those service names can be resolved at runtime. You can find more about Nginx configs for Craft CMS in the nginx-craft GitHub repo.

Service: MariaDB

MariaDB is a drop-in replacement for MySQL that I tend to use instead of MySQL itself. It was written by the original author of MySQL, and is binary compatible with MySQL.

FROM mariadb:10.3

We’ve based the container on the mariadb image, tagged at version 10.3.

There’s no modification at all to the source image.

When the container is started for the first time, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d, so we can use this to seed the initial database. See Initializing a fresh instance.
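
The db-seed/ directory from the project tree is bind mounted to that path in the docker-compose.yaml file, so seeding is just a matter of putting a dump there. A rough sketch, with the source database host and credentials as placeholders:

# Dump an existing database into the db-seed/ directory (placeholder host & user)
mysqldump -h db.example.com -u someuser -p --single-transaction somedb > db-seed/db_seed.sql

# The seed only runs on a fresh data volume, so remove the containers and
# the db-data named volume, then start back up to re-seed
docker-compose down -v
docker-compose up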

Service: Postgres

Postgres is a robust database that I am using more and more for Craft CMS projects. It’s not used in the docker-compose.yaml presented here, but I keep the configuration around in case I want to use it.

Postgres is used in local dev and in production on the devMode.fm GitHub repo, if you want to see it implemented.

FROM postgres:12.2

We’ve based the container on the postgres image, tagged at version 12.2.

There’s no modification at all to the source image.

When the container is started for the first time, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d, so we can use this to seed the initial database. See Initialization scripts.

Service: Redis

Redis is a key/value pair database that I set all of my Craft CMS installs to use, both as a caching method and as a session handler for PHP.

FROM redis:5.0

We’ve based the container on the redis image, tagged at version 5.0.

There’s no modification at all to the source image.

Service: php

PHP is the language that the Yii2 framework and Craft CMS itself are based on, so we need it in order to run our app.

This service is composed of a base image that contains all of the packages and PHP extensions we’ll always need to use, and then the project-specific image that contains whatever additional things are needed for our project.

FROM php:7.3-fpm

# Install packages
RUN apt-get update \
    && \
    # apt Debian packages
    apt-get install -y \
        apt-utils \
        autoconf \
        ca-certificates \
        curl \
        g++ \
        libbz2-dev \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libpng-dev \
        libpq-dev \
        libssl-dev \
        libicu-dev \
        libmagickwand-dev \
        libzip-dev \
        unzip \
        zip \
    && \
    # pecl PHP extensions
    pecl install \
        imagick-3.4.4 \
        redis \
    && \
    # Configure PHP extensions
    docker-php-ext-configure \
        gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        bcmath \
        bz2 \
        exif \
        ftp \
        gettext \
        gd \
        iconv \
        intl \
        mbstring \
        opcache \
        pdo \
        shmop \
        sockets \
        sysvmsg \
        sysvsem \
        sysvshm \
        zip \
    && \
    # Enable PHP extensions
    docker-php-ext-enable \
        imagick \
        redis \
    # Clean apt repo caches that don't need to be part of the image
    && \
    apt-get clean \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Append our php.ini overrides to the end of the file
RUN echo "upload_max_filesize = 10M" > /usr/local/etc/php/php.ini && \
    echo "post_max_size = 10M" >> /usr/local/etc/php/php.ini && \
    echo "max_execution_time = 300" >> /usr/local/etc/php/php.ini && \
    echo "memory_limit = 256M" >> /usr/local/etc/php/php.ini && \
    echo "opcache.revalidate_freq = 0" >> /usr/local/etc/php/php.ini && \
    echo "opcache.validate_timestamps = 1" >> /usr/local/etc/php/php.ini

# Copy the `zzz-docker.conf` file into place for php-fpm
COPY ./zzz-docker.conf /usr/local/etc/php-fpm.d/zzz-docker.conf

We’ve based the container on the php image, tagged at version 7.3.

We’re then adding a bunch of Debian packages and build tools that we want available, as well as some PHP extensions that Craft CMS requires.

Then we append some php.ini overrides, and copy into place the zzz-docker.conf file:

[www]
pm.max_children = 10
pm.process_idle_timeout = 10s
pm.max_requests = 1000

This just sets some defaults for php-fpm that make sense for local development.

By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/php-prod-base on DockerHub.

Since it’s pre-built, we don’t have to build it every time, and can layer on top of this image anything project-specific via the php-prod-craft container image:

FROM nystudio107/php-prod-base

# Install packages
RUN apt-get update \
    && \
    # apt Debian packages
    apt-get install -y \
        nano \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        pdo_mysql \
    && \
    # Install Composer
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer \
    # Clean apt repo caches that don't need to be part of the image
    && \
    apt-get clean \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /var/www/project

COPY ./run_queue.sh .
RUN chmod a+x run_queue.sh

# Create the storage directory and make it writeable by PHP
RUN mkdir -p /var/www/project/cms/storage && \
    mkdir -p /var/www/project/cms/storage/runtime && \
    chown -R www-data:www-data /var/www/project/cms/storage

# Create the cpresources directory and make it writeable by PHP
RUN mkdir -p /var/www/project/cms/web/cpresources && \
    chown -R www-data:www-data /var/www/project/cms/web/cpresources

WORKDIR /var/www/project/cms

# Force permissions, update Craft, and start php-fpm

# Do a `composer install` without running any Composer scripts
# - If `composer.lock` is present, it will install what is in the lock file
# - If `composer.lock` is missing, it will update to the latest dependencies
#   and create the `composer.lock` file
# This automatic running adds to the startup overhead of `docker-compose up`
# but saves far more time in not having to deal with out of sync versions
# when working with teams or multiple environments
CMD composer install --verbose --no-progress --no-scripts --optimize-autoloader --no-interaction \
    && \
    chown -R www-data:www-data /var/www/project/cms/vendor \
    && \
    chown -R www-data:www-data /var/www/project/cms/storage \
    && \
    composer craft-update \
    && \
    php-fpm

This is the image that we actually build into a container, and use for our project. We install the nano editor because I find it handy sometimes, and we also install pdo_mysql so that PHP can connect to our MariaDB database.

We do it this way so that if we want to create a Craft CMS project that uses Postgres, we can just swap in the PDO extension needed here.

Then we make sure the various storage/ and cpresources/ directories are in place, with the right ownership so that Craft will run properly.

Then we do a composer install every time the Docker container is started up. While this takes a little more time, it makes things a whole lot easier when working with teams or on multiple environments.

We have to do the composer install as part of the Docker image CMD because the file system mounts aren’t in place until the CMD is run.

This allows us to update our Composer dependencies just by deleting the composer.lock file, and doing docker-compose up.

Simple.

The alternative is doing a docker exec -it craft_php_1 /bin/bash to open up a shell in our container, and running the command manually. Which is fine, but a little convoluted for some.
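
Both approaches look roughly like this (again, the craft_php_1 container name depends on your Compose project name):

# Option 1: refresh dependencies by deleting the lock file; the container's
# CMD runs `composer install` on the next startup
rm cms/composer.lock
docker-compose up

# Option 2: run Composer by hand inside the running php container
docker exec -it craft_php_1 /bin/bash
# …then, inside the container:
composer install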

Then we make sure that the ownership on important directories is correct, and we run the craft-update Composer script:

{
  "require": {
    "craftcms/cms": "^3.4.0",
    "vlucas/phpdotenv": "^3.4.0",
    "yiisoft/yii2-redis": "^2.0.6",
    "nystudio107/craft-imageoptimize": "^1.0.0",
    "nystudio107/craft-fastcgicachebust": "^1.0.0",
    "nystudio107/craft-minify": "^1.2.5",
    "nystudio107/craft-typogrify": "^1.1.4",
    "nystudio107/craft-retour": "^3.0.0",
    "nystudio107/craft-seomatic": "^3.2.0",
    "nystudio107/craft-webperf": "^1.0.0",
    "nystudio107/craft-twigpack": "^1.1.0"
  },
  "autoload": {
    "psr-4": {
      "modules\\sitemodule\\": "modules/sitemodule/src/"
    }
  },
  "config": {
    "sort-packages": true,
    "optimize-autoloader": true,
    "platform": {
      "php": "7.0"
    }
  },
  "scripts": {
    "craft-update": [
      "@pre-craft-update",
      "@post-craft-update"
    ],
    "pre-craft-update": [
    ],
    "post-craft-update": [
      "@php craft install/check && php craft clear-caches/all || return 0",
      "@php craft install/check && php craft migrate/all || return 0",
      "@php craft install/check && php craft project-config/apply || return 0"
    ],
    "post-root-package-install": [
      "@php -r \"file_exists('.env') || copy('.env.example', '.env');\""
    ],
    "post-create-project-cmd": [
      "@php craft setup/welcome"
    ],
    "pre-update-cmd": "@pre-craft-update",
    "pre-install-cmd": "@pre-craft-update",
    "post-update-cmd": "@post-craft-update",
    "post-install-cmd": "@post-craft-update"
  }
}

So the craft-update script runs two other scripts, pre-craft-update & post-craft-update. If Craft is already installed, these automatically do the following when our container starts up:

  • clear all of Craft’s caches via craft clear-caches/all
  • run any pending migrations via craft migrate/all
  • apply any Project Config changes via craft project-config/apply

Starting from a clean slate like this is so helpful in terms of avoiding silly problems like things being cached, not up to date, etc.

Service: php_xdebug

The php_xdebug service is a variant of the php service with the Xdebug extension installed and configured, so we can debug our code; the Nginx config shown earlier routes any request that carries an XDEBUG_SESSION cookie to this container, and everything else to the plain php container.

This service is composed of a base image that contains all of the packages and PHP extensions we’ll always need to use, and then the project-specific image that contains whatever additional things are needed for our project.

FROM php:7.3-fpm

# Install packages
RUN apt-get update \
    && \
    # apt Debian packages
    apt-get install -y \
        apt-utils \
        autoconf \
        ca-certificates \
        curl \
        g++ \
        libbz2-dev \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libpng-dev \
        libpq-dev \
        libssl-dev \
        libicu-dev \
        libmagickwand-dev \
        libzip-dev \
        unzip \
        zip \
    && \
    # pecl PHP extensions
    pecl install \
        imagick-3.4.4 \
        redis \
        xdebug-2.8.1 \
    && \
    # Configure PHP extensions
    docker-php-ext-configure \
        gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        bcmath \
        bz2 \
        exif \
        ftp \
        gettext \
        gd \
        iconv \
        intl \
        mbstring \
        opcache \
        pdo \
        shmop \
        sockets \
        sysvmsg \
        sysvsem \
        sysvshm \
        zip \
    && \
    # Enable PHP extensions
    docker-php-ext-enable \
        imagick \
        redis \
        xdebug \
    # Clean apt repo caches that don't need to be part of the image
    && \
    apt-get clean \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Append our php.ini overrides to the end of the file
RUN echo "upload_max_filesize = 10M" > /usr/local/etc/php/php.ini && \
    echo "post_max_size = 10M" >> /usr/local/etc/php/php.ini && \
    echo "max_execution_time = 300" >> /usr/local/etc/php/php.ini && \
    echo "memory_limit = 256M" >> /usr/local/etc/php/php.ini && \
    echo "opcache.revalidate_freq = 0" >> /usr/local/etc/php/php.ini && \
    echo "opcache.validate_timestamps = 1" >> /usr/local/etc/php/php.ini

# Copy the `xdebug.ini` file into place for xdebug
COPY ./xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini

# Copy the `zzz-docker.conf` file into place for php-fpm
COPY ./zzz-docker.conf /usr/local/etc/php-fpm.d/zzz-docker.conf

We’ve based the container on the php image, tagged at version 7.3.

We’re then adding a bunch of Debian packages that we want available, some debugging tools (including the Xdebug extension, installed via pecl), as well as some PHP extensions that Craft CMS requires.

Then we add some defaults to the php.ini, and copy into place the xdebug.ini file:

xdebug.default_enable=1
xdebug.remote_enable=1
xdebug.remote_port=9003
xdebug.remote_handler=dbgp
xdebug.remote_connect_back=1
xdebug.remote_host=host.docker.internal
xdebug.remote_autostart=1

…and copy into place the zzz-docker.conf file:

[www]
pm.max_children = 10
pm.process_idle_timeout = 10s
pm.max_requests = 1000

This just sets some defaults for php-fpm that make sense for local development.

By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/php-dev-base on DockerHub.

Since it’s pre-built, we don’t have to build it every time, and can layer on top of this image anything project-specific via the php-dev-craft container image:

FROM nystudio107/php-dev-base

# Install packages
RUN apt-get update \
    && \
    # apt Debian packages
    apt-get install -y \
        nano \
        jpegoptim \
        optipng \
        gifsicle \
        webp \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        pdo_mysql \
    && \
    # Install Composer
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer \
    # Clean apt repo caches that don't need to be part of the image
    && \
    apt-get clean \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /var/www/project

# Create the storage directory and make it writeable by PHP
RUN mkdir -p /var/www/project/cms/storage && \
    mkdir -p /var/www/project/cms/storage/runtime && \
    chown -R www-data:www-data /var/www/project/cms/storage

# Create the cpresources directory and make it writeable by PHP
RUN mkdir -p /var/www/project/cms/web/cpresources && \
    chown -R www-data:www-data /var/www/project/cms/web/cpresources

WORKDIR /var/www/project/cms

This is the image that we actually build into a container, and use for our project. We install the nano editor because I find it handy sometimes, along with the jpegoptim, optipng, gifsicle, and webp image optimization tools, and we also install pdo_mysql so that PHP can connect to our MariaDB database.

We do it this way so that if we want to create a Craft CMS project that uses Postgres, we can just swap in the PDO extension needed here.

Then we make sure the various storage/ and cpresources/ directories are in place, with the right ownership so that Craft will run properly.

Service: Queue

The Queue service is an exact copy of the PHP service, and it makes liberal use of YAML aliases to share the same settings as the PHP service.

The only difference is the addition of command: /var/www/project/run_queue.sh, which is what gets executed when the container is spun up.

The purpose of the Queue service is simply to run any background queue jobs efficiently, as discussed in the Robust queue job handling in Craft CMS article.

We just need to set the config/general.php setting runQueueAutomatically to false so that queue jobs are no longer run via web request.

It uses this “keep alive” script to ensure that the queue restarts should it exit for any reason:

#!/bin/bash

# Run Queue shell script
#
# This shell script runs the Craft CMS queue via `php craft queue/listen`
# It's wrapped in a "keep alive" infinite loop that restarts the command
# (after a 30 second sleep) should it exit unexpectedly for any reason
#
# @author    nystudio107
# @copyright Copyright (c) 2020 nystudio107
# @link      https://nystudio107.com/
# @license   MIT

while true
do
  cd /var/www/project/cms
  php craft queue/listen 10
  echo "-> craft queue/listen will retry in 30 seconds"
  sleep 30
done
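
If you want to keep an eye on the queue worker, or kick off a one-off run by hand, something like this works (using the queue service name from the docker-compose.yaml above):

# Tail the queue worker's output
docker-compose logs -f queue

# Run any pending queue jobs once, then exit
docker-compose exec queue php craft queue/run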

Service: webpack

webpack is the build tool that we use for building the CSS, JavaScript, and other parts of our application.

The setup used here is entirely based on the An Annotated webpack 4 Config for Frontend Web Development article, just with some settings tweaked.

That means our webpack build process runs entirely inside of a Docker container, but we still get all of the Hot Module Replacement goodness for local development.

This service is composed of a base image that contains node itself and all of the Debian packages needed for headless Chrome, and then the project-specific image that contains the npm packages and whatever additional things are needed for our project.

FROM node:11

# Install packages for headless chrome
# https://medium.com/@ssmak/how-to-fix-puppetteer-error-while-loading-shared-libraries-libx11-xcb-so-1-c1918b75acc3
RUN apt-get update \
    && \
    # apt Debian packages
    apt-get install -y \
        ca-certificates \
        fonts-liberation \
        gconf-service \
        libgl1-mesa-glx \
        libasound2 \
        libatk1.0-0 \
        libc6 \
        libcairo2 \
        libcups2 \
        libdbus-1-3 \
        libexpat1 \
        libfontconfig1 \
        libgcc1 \
        libgconf-2-4 \
        libgdk-pixbuf2.0-0 \
        libglib2.0-0 \
        libgtk-3-0 \
        libnspr4 \
        libpango-1.0-0 \
        libpangocairo-1.0-0 \
        libstdc++6 \
        libx11-6 \
        libx11-xcb1 \
        libxcb1 \
        libxcomposite1 \
        libxcursor1 \
        libxdamage1 \
        libxext6 \
        libxfixes3 \
        libxi6 \
        libxrandr2 \
        libxrender1 \
        libxss1 \
        libxtst6 \
        libappindicator1 \
        libnss3 \
        lsb-release \
        wget \
        xdg-utils \
    # Clean apt repo caches that don't need to be part of the image
    && \
    apt-get clean \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

We’ve based the container on the node image, tagged at version 11.

We’re then adding a bunch of Debian packages that we need in order to get headless Chrome working (needed for Critical CSS generation), as well as other libraries needed for the Sharp image library to work effectively.

By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/node-dev-base on DockerHub.

Since it’s pre-built, we don’t have to build it every time, and can layer on top of this image anything project-specific via the node-dev-webpack container image:

FROM nystudio107/node-dev-base

WORKDIR /var/www/project/buildchain/

# Run our webpack build in debug mode

# We'd normally use `npm ci` here, but by using `install`:
# - If `package-lock.json` is present, it will install what is in the lock file
# - If `package-lock.json` is missing, it will update to the latest dependencies
#   and create the `package-lock-json` file
# This automatic running adds to the startup overhead of `docker-compose up`
# but saves far more time in not having to deal with out of sync versions
# when working with teams or multiple environments
CMD npm install \
    && \
    npm run debug

Then, just like with the php service, this does an npm install every time the Docker container is created. While this adds some time, it saves far more in keeping everyone on the team or in multiple environments in sync.

We have to do the npm install as part of the Docker image CMD because the file system mounts aren’t in place until the CMD is run.

This allows us to update our npm dependencies just by deleting the package-lock.json file, and doing docker-compose up.

The alternative is doing a docker exec -it craft_webpack_1 /bin/bash to open up a shell in our container, and running the command manually.
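
As a sketch, the two workflows look like this (the craft_webpack_1 container name again depends on your Compose project name, and debug is whatever script your buildchain/package.json defines):

# Option 1: refresh npm dependencies by deleting the lock file; the
# container's CMD runs `npm install` on the next startup
rm buildchain/package-lock.json
docker-compose up webpack

# Option 2: do it by hand inside the running webpack container
docker exec -it craft_webpack_1 /bin/bash
# …then, inside the container:
npm install
npm run debug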

All Aboard!

Hopefully this annotated Docker config has been useful to you. If you use Craft CMS, you can dive in and start using it yourself; if you use something else entirely, the concepts here should still be very salient for your project.

I think that Docker — or some other conceptually similar containerization strategy — is going to be an important technology going forward. So it’s time to jump on board.

As mentioned earlier, the Docker config used here is used in both the devMode.fm GitHub repo, and in the nystudio107/craft boilerplate Composer project if you want to see some “in the wild” examples.

Happy containerizing!