Andrew Welch
An Annotated Docker Config for Frontend Web Development
A local development environment with Docker allows you to shrink-wrap the devops your project needs as config, making onboarding frictionless
Docker is a tool for containerizing your applications, which means that your application is shrink-wrapped with the environment that it needs to run.
This allows you to define the devops your application needs in order to run as config, which can then be easily replicated and reused.
The principles and approach discussed in this article are universal.
While there are many uses for Docker, this article will focus on using Docker as a local environment for frontend web development.
Although Craft CMS is referenced in this article, Docker works well for any kind of web development with any kind of CMS or dev stack (Laravel, NodeJS, Rails, whatevs).
The Docker config used here is used in both the devMode.fm GitHub repo, and in the nystudio107/craft boilerplate Composer project if you want to see some “in the wild” examples.
The Docker config on its own can be found at nystudio107/docker-images, and the pre-built base images are up on DockerHub.
The base images are all multi-arch, so you folks using Apple Silicon M1 processors will get native images, too.
Why Docker?
If you’re doing frontend web development, you very likely already have some kind of a local development environment.
So why should you use Docker instead?
This is a very reasonable question to ask, because any kind of switch of tooling requires some upskilling, and some work.
I’ve long been using Homestead (which is really just a custom Vagrant box with some extras) as my local dev environment, as discussed in the Local Development with Vagrant / Homestead article.
I’d chosen to use Homestead because I wanted a local dev environment that was deterministic, disposable, and separated my development environment from my actual computer.
Docker has all of these advantages, but also a much more lightweight approach. Here are the advantages of Docker for me:
- Each application has exactly the environment it needs to run, including specific versions of any of the plumbing needed to get it to work (PHP, MySQL, Postgres, whatever)
- Onboarding others becomes trivial, all they need to do is install Docker and type docker-compose up and away they go
- Your development environment is entirely disposable; if something goes wrong, you just delete it and fire up a new one
- Your local computer is separate from your development environment, so switching computers is trivial, and you won’t run into issues where you hose your computer or are stuck with conflicting versions of devops services
- The cost of trying different versions of various services is low; just change a number in a .yaml file, docker-compose up, and away you go
There are other advantages as well, but these are the more important ones for me.
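To illustrate how low the cost of trying different service versions is, here’s a hypothetical docker-compose.yaml fragment; bumping Redis from 5 to 6 is a one-character change followed by docker-compose up:

```yaml
# Hypothetical compose fragment — change the image tag
# and re-run `docker-compose up` to try a different Redis version
services:
  redis:
    image: redis:6-alpine   # was redis:5-alpine
    expose:
      - "6379"
```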
Additionally, containerizing your application in local development is a great first step to using a containerized deployment process, and running Docker in production as well.
A disadvantage with any kind of virtualization is performance, but that can be mitigated by having modern hardware, a bunch of memory, and optimizing Docker via the Performance Tuning Docker for Mac article.
Understanding Docker
This article is not a comprehensive tutorial on Docker, but I will attempt to explain some of the more important, broader concepts.
Docker has the notion of containers, each of which runs one or more services. You can think of each container as a mini virtual machine (even though technically, they are not).
While you can run multiple services in a single Docker container, separating each service out into a separate container has many advantages.
If PHP, Apache, and MySQL are all in separate containers, they won’t affect each other, and also can be more easily swapped in and out.
If you decide you want to use Nginx or Postgres instead, the decoupling into separate containers makes it easy!
Docker containers are built from Docker images, which can be thought of as a recipe for building the container, with all of the files and code needed to make it happen.
If a Docker image is the recipe, a Docker container is the finished resulting meal.
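To make the recipe metaphor concrete, here’s a minimal, hypothetical Dockerfile (not one from this project); each instruction bakes another layer into the image:

```dockerfile
# Start from a published base image (one layer)
FROM alpine:3.13
# Add a package (another layer)
RUN apk add --no-cache curl
# Copy application files into the image (another layer)
COPY ./app /app
# The command a container built from this image runs by default
CMD ["/app/run"]
```

Running docker build bakes this recipe into an image; docker run then serves up a container from it.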
Docker images are built up in layers, and this layering works thanks to the union file system, which handles composing all of the layers of the cake together for you.
We said earlier that Docker is more lightweight than running a full Vagrant VM, and it is… but unfortunately, unless you’re running Linux there still is a virtualization layer running, which is HyperKit for the Mac, and Hyper‑V for Windows.
Docker for Mac & Windows still has a virtualization layer, it’s just relatively lightweight.
Fortunately, you don’t need to be concerned with any of this, but the performance implications do inform some of the decisions we’ve made in the Docker config presented here.
For more information on Docker, I’d highly recommend the Docker Mastery course (if it’s not on sale now, don’t worry, it will be at some point) and also the following devMode.fm episodes:
…and there are tons of other excellent educational resources on Docker out there such as Matt Gray’s Craft in Docker: Everything I’ve Learnt presentation, and his excellent A Craft CMS Development Workflow With Docker series.
In our article, we will focus on annotating a real-world Docker config that’s used in production. We’ll discuss various Docker concepts as we go, but the primary goal here is documenting a working config.
This article is what I wished existed when I started learning Docker
I learn best by looking at a working example, and picking it apart. If you do, too, let’s get going!
xdebug performance
Before we delve into the Docker setup, a quick discussion on xdebug is in order.
Xdebug is a tool that allows you to debug your PHP code by setting breakpoints, inspecting variables, profiling code, and so on. It’s vital when you need it, but it also slows things down when you don’t.
xdebug is crucial for PHP development, but it’s also slow
Most of the time we don’t need xdebug, but the overhead of merely having xdebug installed can slow down frontend requests. There are ways you can disable xdebug via environment variable (and other methods), but they usually require rebuilding your container.
Additionally, just having xdebug installed adds overhead. I was researching this conundrum (whilst also re-evaluating my life) when I discovered the article Developing at Full Speed with Xdebug.
Essentially what we do is just have two PHP containers running, one that’s our development environment with xdebug installed, the other that is our production environment without our debugging tools.
When a request comes in, Nginx looks to see whether there’s an XDEBUG_SESSION or XDEBUG_PROFILE cookie set. If there’s no cookie, it routes the request to the regular php container.
If however the XDEBUG_SESSION or XDEBUG_PROFILE cookie is set (with any value), it routes the request to the php_xdebug container.
You can set this cookie with a browser extension, your IDE, or via a number of other methods. Here is the Xdebug Helper browser extension for your favorite browsers: Chrome — Firefox — Safari
Here’s a video of it in action:
[Video: xdebug cookie routing in action]
This elegant solution allows us to develop at full speed using Docker & PHP, while also having xdebug instantly available if we need it.
🔥
Alpine Linux
When I originally created my Docker setup, I used the default Ubuntu images, because I was familiar with Ubuntu, and I’d been told that I’d have fewer issues getting things up and running.
This was all true, but I decided to go in and refactor all of my images to be based on Alpine Linux, a Linux distribution that stresses small image sizes and efficiency. Here’s what it looks like converted over:
[Image: Docker image sizes after the Alpine Linux conversion]
N.B.: the sizes above refer only to the space on disk; the images don’t use up this much memory when in use. On my laptop, I have 2GB allocated to Docker, and the memory usage looks something like this:
[Image: Docker memory usage with 2GB allocated]
Having smaller Docker images means that they take less time to download, they take up less disk space, and are in general more efficient.
And they are more in line with the Docker “bring only what you need” philosophy.
My Docker Directory Structure
This Docker setup uses a directory structure that looks like this (don’t worry, it’s not as complex as it seems, many of the Docker images here are for reference only, and are actually pre-built):
├── buddy.yml
├── buildchain
│ ├── package.json
│ ├── package-lock.json
│ ├── postcss.config.js
│ ├── tailwind.config.js
│ ├── webpack.common.js
│ ├── webpack.dev.js
│ ├── webpack.prod.js
│ └── webpack.settings.js
├── CHANGELOG.md
├── cms
│ ├── composer.json
│ ├── composer.lock
│ ├── config
│ ├── craft
│ ├── craft.bat
│ ├── example.env
│ ├── modules
│ ├── storage
│ ├── templates
│ ├── vendor
│ └── web
├── db-seed
│ └── db_seed.sql
├── docker-compose.yml
├── docker-config
│ ├── mariadb
│ │ └── Dockerfile
│ ├── nginx
│ │ ├── default.conf
│ │ └── Dockerfile
│ ├── node-dev-base
│ │ └── Dockerfile
│ ├── node-dev-webpack
│ │ └── Dockerfile
│ ├── php-dev-base
│ │ ├── Dockerfile
│ │ ├── xdebug.ini
│ │ └── zzz-docker.conf
│ ├── php-dev-craft
│ │ └── Dockerfile
│ ├── php-prod-base
│ │ ├── Dockerfile
│ │ └── zzz-docker.conf
│ ├── php-prod-craft
│ │ ├── Dockerfile
│ │ └── run_queue.sh
│ ├── postgres
│ │ └── Dockerfile
│ └── redis
│ └── Dockerfile
├── migrations
├── scripts
│ ├── common
│ ├── docker_prod_build.sh
│ ├── docker_pull_db.sh
│ ├── docker_restore_db.sh
│ └── example.env.sh
├── src
│ ├── conf
│ ├── css
│ ├── img
│ ├── js
│ ├── php
│ ├── templates -> ../cms/templates
│ └── vue
└── tsconfig.json
Here’s an explanation of what the top-level directories are:
- cms — everything needed to run Craft CMS. This is the “app” of the project
- docker-config — an individual directory for each service that the Docker setup uses, with a Dockerfile and other ancillary config files therein
- scripts — helper shell scripts that do things like pull a remote or local database into the running Docker container. These are derived from the Craft-Scripts shell scripts
- src — the frontend JavaScript, CSS, Vue, etc. source code that the project uses
Each service is referenced in the docker-compose.yaml file, and defined in the Dockerfile that is in the corresponding directory in the docker-config/ directory.
It isn’t strictly necessary to have a separate Dockerfile for each service, if they are just derived from a base image. But I like the consistency, and ease of future expansion should something custom be necessary down the road.
You’ll also notice that there are php-dev-base and php-dev-craft directories, as well as node-dev-base and node-dev-webpack directories, and might be wondering why they aren’t consolidated.
The reason is that there’s a whole lot of the base setup in both that just never changes, so instead of rebuilding that each time, we can build it once and publish the images up on DockerHub.com as nystudio107/php-dev-base and nystudio107/node-dev-base.
Then we can layer anything specific about our project on top of these base images in the respective -craft services. This saves us significant building time, while keeping flexibility.
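Conceptually, each project image is just a thin layer on top of its pre-built base; a minimal sketch (the tag and added package here are hypothetical examples):

```dockerfile
# Pulled pre-built from DockerHub — the slow, rarely-changing part
FROM nystudio107/php-dev-base:8.0-alpine
# Only the project-specific additions get built locally
RUN apk add --no-cache mysql-client
```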
The docker-compose.yaml file
While a docker-compose.yaml file isn’t required when using Docker, from a practical point of view, you’ll almost always use it. The docker-compose.yaml file allows you to define multiple containers for running the services you need, and coordinate starting them up and shutting them down in unison.
Then all you need to do is run docker-compose up via the terminal in a directory that has a docker-compose.yaml file, and Docker will start up all of your containers for you!
Here’s an example of what that might look like, starting up your Docker containers:
[Video: docker-compose up starting the containers]
Let’s have a look at our docker-compose.yaml file:
version: '3.7'

services:
  # nginx - web server
  nginx:
    build:
      context: ./docker-config/nginx
      dockerfile: ./Dockerfile
    env_file: &env
      - ./cms/.env
    init: true
    ports:
      - "8000:80"
    volumes:
      - cpresources:/var/www/project/cms/web/cpresources:delegated
      - ./cms/web:/var/www/project/cms/web:cached
  # php - run php-fpm
  php:
    build: &php-build
      context: ./docker-config/php-prod-craft
      dockerfile: ./Dockerfile
    depends_on:
      - "mariadb"
      - "redis"
    env_file:
      *env
    expose:
      - "9000"
    init: true
    volumes: &php-volumes
      - cpresources:/var/www/project/cms/web/cpresources:delegated
      - storage:/var/www/project/cms/storage:delegated
      - ./cms:/var/www/project/cms:cached
      # Specific directories that need to be bind-mounted
      - ./cms/storage/logs:/var/www/project/cms/storage/logs:delegated
      - ./cms/storage/runtime/compiled_templates:/var/www/project/cms/storage/runtime/compiled_templates:delegated
      - ./cms/storage/runtime/compiled_classes:/var/www/project/cms/storage/runtime/compiled_classes:delegated
      - ./cms/vendor:/var/www/project/cms/vendor:delegated
  # php - run php-fpm with xdebug
  php_xdebug:
    build:
      context: ./docker-config/php-dev-craft
      dockerfile: ./Dockerfile
    depends_on:
      - "php"
    env_file:
      *env
    expose:
      - "9000"
    init: true
    volumes:
      *php-volumes
  # queue - runs queue jobs via php craft queue/listen
  queue:
    build:
      *php-build
    command: /var/www/project/run_queue.sh
    depends_on:
      - "php"
    env_file:
      *env
    init: true
    volumes:
      *php-volumes
  # mariadb - database
  mariadb:
    build:
      context: ./docker-config/mariadb
      dockerfile: ./Dockerfile
    env_file:
      *env
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: project
    init: true
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql
      - ./db-seed/:/docker-entrypoint-initdb.d
  # redis - key/value database for caching & php sessions
  redis:
    build:
      context: ./docker-config/redis
      dockerfile: ./Dockerfile
    expose:
      - "6379"
    init: true
  # vite - frontend build system
  vite:
    build:
      context: ./docker-config/node-dev-vite
      dockerfile: ./Dockerfile
    env_file:
      *env
    init: true
    ports:
      - "3000:3000"
    volumes:
      - ./buildchain:/var/www/project/buildchain:cached
      - ./buildchain/node_modules:/var/www/project/buildchain/node_modules:delegated
      - ./cms/web:/var/www/project/cms/web:delegated
      - ./src:/var/www/project/src:cached
      - ./cms/templates:/var/www/project/cms/templates:cached

volumes:
  db-data:
  cpresources:
  storage:
This .yaml file has 3 top-level keys:
- version — the version number of the Docker Compose file, which corresponds to different capabilities offered by different versions of the Docker Engine
- services — each service corresponds to a separate Docker container that is created using a separate Docker image
- volumes — named volumes that are mounted and can be shared amongst your Docker containers (but not your host computer), for storing persistent data
We’ll detail each service below, but there are a few interesting tidbits to cover first.
BUILD
When you’re creating a Docker container, you can either base it on an existing image (either a local image or one pulled down from DockerHub.com), or you can build it locally via a Dockerfile.
As mentioned above, I chose the methodology that each service would be created as a build from a Dockerfile (all of which extend FROM an image up on DockerHub.com) to keep things consistent.
This means that some of the Dockerfiles we use are nothing more than a single line, e.g.: FROM mariadb:10.3, but this setup does allow for expansion.
The two keys used for build are:
- context — this specifies where the working directory for the build should be, relative to the docker-compose.yaml file. Here it’s set to each service’s own directory inside docker-config/
- dockerfile — this specifies a path to the Dockerfile to use to build the service Docker container. Think of the Dockerfile as a local Docker image
So the context is always the root directory of each service, with the Dockerfile and any supporting files for each service kept together there, off in their own separate directory. We do it this way so that we’re not passing down more than is needed when building the Docker images, which would otherwise slow down the build process significantly (thanks to Mizux Seiha & Patrick Harrington for pointing this out!).
DEPENDS_ON
This just lets you specify which other services this particular service depends on; it ensures those containers are started before this container starts up. Note that depends_on only waits for the containers it depends on to start, not for the services inside them to be ready, which is why the composer_install.sh script later polls the database before running migrations.
ENV_FILE
The env_file setting specifies a path to your .env file for key/value pairs that will be injected into a Docker container.
Docker does not allow for quotes in its .env file, which is contrary to how .env files work almost everywhere else… so remove any quotes you have in your .env file.
You’ll notice that for the nginx service, there’s a strange &env value in the env_file setting, and for the other services, the setting is *env. This is taking advantage of YAML aliases, so if we do change the .env file path, we only have to do it in one place.
Doing it this way also ensures that all of the .env environment variables are available in every container. For more on environment variables, check out the Flat Multi-Environment Config for Craft CMS 3 article.
Because it’s Docker that is injecting these .env environment variables, if you change your .env file, you’ll need to restart your Docker containers.
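For example (hypothetical variable), the same value written both ways:

```
# Works with most .env parsers, but NOT with Docker's env_file handling:
# DB_PASSWORD="secret"

# Docker-compatible — no quotes:
DB_PASSWORD=secret
```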
INIT
Setting init: true for a service runs a minimal init process as PID 1 in the container, which forwards signals to the service’s process, allowing containers to terminate quickly when you halt them with Control-C.
PORTS
This specifies the port that should be exposed outside of the container, followed by the port that the container uses internally. So for example, the nginx service has "8000:80", which means the externally accessible port for the Nginx webserver is 8000, and the internal port the service runs on is 80.
If this sounds confusing, understand that Docker uses its own internal network to allow containers to talk to each other, as well as the outside world.
VOLUMES
Docker containers run in their own little world, which is great for isolation purposes, but at some point you do need to share things from your “host” computer with the Docker container.
Docker volumes allow you to do this. You specify either a named volume or a path on your host, followed by the path where this volume should be bind mounted in the Docker container.
This is where performance problems can happen with Docker on the Mac and Windows. So we use some hints to help with performance:
- consistent — perfect consistency (host and container have an identical view of the mount at all times)
- cached — the host’s view is authoritative (permit delays before updates on the host appear in the container)
- delegated — the container’s view is authoritative (permit delays before updates on the container appear in the host)
So for things like node_modules/ and vendor/ we mark them as :delegated because while we want them shared, the container is in control of modifying these volumes.
Some Docker setups I’ve seen put these directories into a named volume, which means they are visible only to the Docker containers.
But the problem is we lose out on our editor auto-completion, because our editor has nothing to index.
This is a non-negotiable for me
See the Auto-Complete Craft CMS 3 APIs in Twig with PhpStorm article for details.
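As a sketch of the difference (the vendor/ path is from this project; the named-volume variant is shown only for contrast):

```yaml
volumes:
  # Named volume: fast, but visible only to containers — nothing for your editor to index
  # - vendor:/var/www/project/cms/vendor
  # Bind mount with :delegated: shared with the host so your editor can index vendor/,
  # while the container stays authoritative for writes
  - ./cms/vendor:/var/www/project/cms/vendor:delegated
```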
Service: Nginx
FROM nginx:1.19-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
We’ve based the container on the nginx image, tagged at version 1.19
The only modification it makes is COPYing our default.conf file into place:
# default Docker DNS server
resolver 127.0.0.11;
# If a cookie doesn't exist, it evaluates to an empty string, so if neither cookie exists, it'll match :
# (empty string on either side of the :), but if either or both cookies are set, it won't match, and will hit the default rule
map $cookie_XDEBUG_SESSION:$cookie_XDEBUG_PROFILE $my_fastcgi_pass {
default php_xdebug;
':' php;
}
server {
listen 80;
listen [::]:80;
server_name _;
root /var/www/project/cms/web;
index index.html index.htm index.php;
charset utf-8;
gzip_static on;
ssi on;
client_max_body_size 0;
error_page 404 /index.php?$query_string;
access_log off;
error_log /dev/stdout info;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
try_files $uri/index.html $uri $uri/ /index.php?$query_string;
}
location ~ [^/]\.php(/|$) {
try_files $uri $uri/ /index.php?$query_string;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass $my_fastcgi_pass:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_param HTTP_PROXY "";
add_header Last-Modified $date_gmt;
add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
if_modified_since off;
expires off;
etag off;
fastcgi_intercept_errors off;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
}
}
This is just a simple Nginx config that works well with Craft CMS. You can find more about Nginx configs for Craft CMS in the nginx-craft GitHub repo.
The only real “magic” here is our map directive:
# If a cookie doesn't exist, it evaluates to an empty string, so if neither cookie exists, it'll match :
# (empty string on either side of the :), but if either or both cookies are set, it won't match, and will hit the default rule
map $cookie_XDEBUG_SESSION:$cookie_XDEBUG_PROFILE $my_fastcgi_pass {
default php_xdebug;
':' php;
}
This just sets the $my_fastcgi_pass variable to php if there is no XDEBUG_SESSION or XDEBUG_PROFILE cookie set, otherwise it sets it to php_xdebug
We use this variable later on in the config file:
fastcgi_pass $my_fastcgi_pass:9000;
This is what allows the routing of debug requests to the right container, for performance reasons.
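The map’s routing decision can be sketched in shell (illustration only; Nginx does this natively via the map directive):

```shell
# Mirror the Nginx map: the two cookie values are joined with ':';
# only the empty:empty case routes to the plain php container
route_request() {
  local key="$1:$2"   # $1 = XDEBUG_SESSION cookie, $2 = XDEBUG_PROFILE cookie
  if [ "$key" = ":" ]; then
    echo "php"
  else
    echo "php_xdebug"
  fi
}

route_request "" ""     # → php
route_request "1" ""    # → php_xdebug
route_request "" "1"    # → php_xdebug
```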
Service: MariaDB
FROM yobasystems/alpine-mariadb:10.4.15
We’ve based the container on the mariadb image, tagged at version 10.4.15
There’s no modification at all to the source image.
When the container is started for the first time, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d so we can use this to seed the initial database. See Initializing a fresh instance.
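For example, seeding can be set up just by dropping a file into db-seed/, since that directory is bind-mounted to /docker-entrypoint-initdb.d in the compose file (the table here is a hypothetical example):

```shell
# Any .sql (or .sh / .sql.gz) file in db-seed/ is executed the first
# time the mariadb container starts with an empty data volume
mkdir -p db-seed
cat > db-seed/db_seed.sql <<'SQL'
CREATE TABLE IF NOT EXISTS example (id INT PRIMARY KEY);
SQL
ls db-seed    # → db_seed.sql
```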
Service: Postgres
Postgres is a robust database that I am using more and more for Craft CMS projects. It’s not used in the docker-compose.yaml presented here, but I keep the configuration around in case I want to use it.
Postgres is used in local dev and in production on the devMode.fm GitHub repo, if you want to see it implemented.
FROM postgres:12.2
We’ve based the container on the postgres image, tagged at version 12.2
There’s no modification at all to the source image.
When the container is started for the first time, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d so we can use this to seed the initial database. See Initialization scripts.
Service: Redis
Redis is a key/value pair database that I set all of my Craft CMS installs to use both as a caching method, and as a session handler for PHP.
FROM redis:5-alpine
We’ve based the container on the redis image, tagged at version 5
There’s no modification at all to the source image.
Service: php
PHP is the language that the Yii2 framework and Craft CMS itself is based on, so we need it in order to run our app.
This is the PHP container that is used for regular web requests, so it does not include xdebug for performance reasons.
This service is composed of a base image that contains all of the packages and PHP extensions we’ll always need to use, and then the project-specific image that contains whatever additional things are needed for our project.
FROM php:8.0-fpm-alpine
# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
autoconf \
dpkg-dev \
dpkg \
file \
g++ \
gcc \
libc-dev \
make \
pkgconf \
re2c \
wget
# Install packages
RUN set -eux; \
# Packages needed only for build
apk add --no-cache --virtual .build-deps \
$PHPIZE_DEPS \
&& \
# Packages to install
apk add --no-cache \
bzip2-dev \
ca-certificates \
curl \
fcgi \
freetype-dev \
gettext-dev \
icu-dev \
imagemagick \
imagemagick-dev \
libjpeg-turbo-dev \
libmcrypt-dev \
libpng \
libpng-dev \
libressl-dev \
libtool \
libxml2-dev \
libzip-dev \
oniguruma-dev \
unzip \
&& \
# pecl PHP extensions
pecl install \
imagick-3.4.4 \
redis \
&& \
# Configure PHP extensions
docker-php-ext-configure \
gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ \
&& \
# Install PHP extensions
docker-php-ext-install \
bcmath \
bz2 \
exif \
ftp \
gettext \
gd \
iconv \
intl \
mbstring \
opcache \
pdo \
shmop \
sockets \
sysvmsg \
sysvsem \
sysvshm \
zip \
&& \
# Enable PHP extensions
docker-php-ext-enable \
imagick \
redis \
&& \
# Remove the build deps
apk del .build-deps \
&& \
# Clean out directories that don't need to be part of the image
rm -rf /tmp/* /var/tmp/*
# https://github.com/docker-library/php/issues/240
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/ gnu-libiconv
ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php
# Copy the `zzz-docker-php.ini` file into place for php
COPY zzz-docker-php.ini /usr/local/etc/php/conf.d/
# Copy the `zzz-docker-php-fpm.conf` file into place for php-fpm
COPY zzz-docker-php-fpm.conf /usr/local/etc/php-fpm.d/
We’ve based the container on the php image, tagged at version 8.0
We’re then adding a bunch of packages that we want available for our Alpine operating system base, some debugging tools, as well as some PHP extensions that Craft CMS requires.
Then we copy into place the zzz-docker-php-fpm.conf file:
[www]
pm.max_children = 10
pm.process_idle_timeout = 30s
pm.max_requests = 1000
This just sets some defaults for php-fpm that make sense for local development.
Then we copy into place the zzz-docker-php.ini file with some sane defaults for local Craft CMS development:
[php]
memory_limit=256M
max_execution_time=300
max_input_time=300
max_input_vars=5000
upload_max_filesize=100M
post_max_size=100M
[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=1
By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/php-prod-base on DockerHub.
Since it’s pre-built, we don’t have to build it every time, and can layer on top of this image anything project-specific via the php-prod-craft container image:
FROM nystudio107/php-prod-base:8.0-alpine
# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
autoconf \
dpkg-dev \
dpkg \
file \
g++ \
gcc \
libc-dev \
make \
pkgconf \
re2c \
wget
# Install packages
RUN set -eux; \
# Packages needed only for build
apk add --no-cache --virtual .build-deps \
$PHPIZE_DEPS \
&& \
# Packages to install
apk add --no-cache \
su-exec \
gifsicle \
jpegoptim \
libwebp-tools \
nano \
optipng \
mysql-client \
&& \
# Install PHP extensions
docker-php-ext-install \
pdo_mysql \
&& \
# Install Composer
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer \
&& \
# Remove the build deps
apk del .build-deps \
&& \
# Clean out directories that don't need to be part of the image
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /var/www/project
COPY ./run_queue.sh .
RUN chmod a+x run_queue.sh \
&& \
mkdir -p /var/www/project/cms/storage \
&& \
mkdir -p /var/www/project/cms/web/cpresources \
&& \
chown -R www-data:www-data /var/www/project
COPY ./composer_install.sh .
RUN chmod a+x composer_install.sh
# Run the composer_install.sh script that will do a `composer install`:
# - If `composer.lock` is missing
# - If `vendor/` is missing
# ...then start up php-fpm. The `run_queue.sh` script in the queue container
# will take care of running any pending migrations and apply any Project Config changes,
# as well as set permissions via an async CLI process
CMD ./composer_install.sh \
&& \
php-fpm
This is the image that we actually build into a container, and use for our project. We install the nano editor because I find it handy sometimes, and we also install pdo_mysql so that PHP can connect to our MariaDB database.
We do it this way so that if we want to create a Craft CMS project that uses Postgres, we can just swap in the PDO extension needed here.
Then we make sure the various storage/ and cpresources/ directories are in place, with the right ownership so that Craft will run properly.
Then a composer install is done (when needed) every time the Docker container is started up. While this takes a little more time, it makes things a whole lot easier when working with teams or on multiple environments.
We have to do the composer install as part of the Docker image CMD because the file system mounts aren’t in place until the CMD is run.
This allows us to update our Composer dependencies just by deleting the composer.lock file, and doing docker-compose up
Simple.
The alternative is doing a docker exec -it craft_php_1 /bin/bash to open up a shell in our container, and running the command manually. Which is fine, but a little convoluted for some.
This container runs the shell script composer_install.sh when it starts up, to do a composer install if composer.lock or vendor/ is not present:
#!/bin/bash

# Composer Install shell script
#
# This shell script runs `composer install` if either the `composer.lock` file or
# the `vendor/` directory is not present
#
# @author nystudio107
# @copyright Copyright (c) 2022 nystudio107
# @link https://nystudio107.com/
# @license MIT

# Ensure permissions on directories Craft needs to write to
chown -R www-data:www-data /var/www/project/cms/storage
chown -R www-data:www-data /var/www/project/cms/web/cpresources

# Check for `composer.lock` & `vendor/`
cd /var/www/project/cms
if [ ! -f "composer.lock" ] || [ ! -d "vendor" ]; then
    su-exec www-data composer install --verbose --no-progress --no-scripts --optimize-autoloader --no-interaction
    # Wait until the MySQL db container responds
    echo "### Waiting for MySQL database"
    until eval "mysql -h mysql -u $DB_USER -p$DB_PASSWORD $DB_DATABASE -e 'select 1' > /dev/null 2>&1"
    do
        sleep 1
    done
    # Run any pending migrations/project config changes
    su-exec www-data composer craft-update
fi
The composer.json scripts:
{
  "require": {
    "craftcms/cms": "^3.4.0",
    "vlucas/phpdotenv": "^3.4.0",
    "yiisoft/yii2-redis": "^2.0.6",
    "nystudio107/craft-autocomplete": "^1.0.0",
    "nystudio107/craft-imageoptimize": "^1.0.0",
    "nystudio107/craft-fastcgicachebust": "^1.0.0",
    "nystudio107/craft-minify": "^1.2.5",
    "nystudio107/craft-typogrify": "^1.1.4",
    "nystudio107/craft-retour": "^3.0.0",
    "nystudio107/craft-seomatic": "^3.2.0",
    "nystudio107/craft-webperf": "^1.0.0",
    "nystudio107/craft-twigpack": "^1.1.0"
  },
  "autoload": {
    "psr-4": {
      "modules\\sitemodule\\": "modules/sitemodule/src/"
    }
  },
  "config": {
    "sort-packages": true,
    "optimize-autoloader": true
  },
  "scripts": {
    "craft-update": [
      "@pre-craft-update",
      "@post-craft-update"
    ],
    "pre-craft-update": [
    ],
    "post-craft-update": [
      "@php craft install/check && php craft clear-caches/all --interactive=0 || exit 0",
      "@php craft install/check && php craft migrate/all --interactive=0 || exit 0",
      "@php craft install/check && php craft project-config/apply --interactive=0 || exit 0"
    ],
    "post-root-package-install": [
      "@php -r \"file_exists('.env') || copy('.env.example', '.env');\""
    ],
    "post-create-project-cmd": [
      "@php craft setup/welcome"
    ],
    "pre-update-cmd": "@pre-craft-update",
    "pre-install-cmd": "@pre-craft-update",
    "post-update-cmd": "@post-craft-update",
    "post-install-cmd": "@post-craft-update"
  }
}
So the craft-update script runs two other scripts: pre-craft-update (currently empty) & post-craft-update. These automatically do the following when our container starts up:
- All caches are cleared
- All migrations are run
- Project Config is applied
Starting from a clean slate like this is so helpful in terms of avoiding silly problems such as stale caches or out-of-date migrations.
Link Service: php_xdebug
The php_xdebug container is very similar to the previous php container, but it includes xdebug so that we can do serious PHP debugging when we need it.
Requests get routed by Nginx to this container automatically if the XDEBUG_SESSION cookie is set.
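That cookie-based routing can be sketched with an nginx map block keyed on the XDEBUG_SESSION cookie. This is a minimal sketch, not the exact project config; the upstream host names php and php_xdebug are assumptions:

```nginx
# Hypothetical sketch: choose the PHP-FPM upstream based on whether the
# XDEBUG_SESSION cookie is set. Upstream names are assumptions.
map $cookie_XDEBUG_SESSION $phpfpm_host {
    default "php:9000";
    ~.+     "php_xdebug:9000";
}

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Requests with the cookie go to the xdebug container; all others
        # go to the regular (faster) php container
        fastcgi_pass $phpfpm_host;
    }
}
```

Note that using a variable in fastcgi_pass may require a resolver directive (or an upstream block) so nginx can resolve the container host names at request time.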
FROM nystudio107/php-prod-base:8.0-alpine
# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
autoconf \
dpkg-dev \
dpkg \
file \
g++ \
gcc \
libc-dev \
make \
pkgconf \
re2c \
wget
# Install packages
RUN set -eux; \
# Packages needed only for build
apk add --no-cache --virtual .build-deps \
$PHPIZE_DEPS \
&& \
# pecl PHP extensions
pecl install \
xdebug-3.0.2 \
&& \
# Enable PHP extensions
docker-php-ext-enable \
xdebug \
&& \
# Remove the build deps
apk del .build-deps \
&& \
# Clean out directories that don't need to be part of the image
rm -rf /tmp/* /var/tmp/*
# Copy the `xdebug.ini` file into place for xdebug
COPY ./xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini
# Copy the `zzz-docker-php.ini` file into place for php
COPY zzz-docker-php.ini /usr/local/etc/php/conf.d/
# Copy the `zzz-docker-php-fpm.conf` file into place for php-fpm
COPY zzz-docker-php-fpm.conf /usr/local/etc/php-fpm.d/
We’ve based the image on the nystudio107/php-prod-base image described above, tagged at version 8.0
We’re then adding in the xdebug 3 PHP extension, and copying into place the xdebug.ini file:
xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.client_host=host.docker.internal
…and copying into place the zzz-docker-php-fpm.conf file:
[www]
pm.max_children = 10
pm.process_idle_timeout = 10s
pm.max_requests = 1000
This just sets some defaults for php-fpm that make sense for local development.
By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/php-dev-base on DockerHub.
Since it’s pre-built, we don’t have to build it every time, and can layer on top of this image anything project-specific via the php-dev-craft container image:
FROM nystudio107/php-dev-base:8.0-alpine
# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
autoconf \
dpkg-dev \
dpkg \
file \
g++ \
gcc \
libc-dev \
make \
pkgconf \
re2c \
wget
# Install packages
RUN set -eux; \
# Packages needed only for build
apk add --no-cache --virtual .build-deps \
$PHPIZE_DEPS \
&& \
# Packages to install
apk add --no-cache \
su-exec \
gifsicle \
jpegoptim \
libwebp-tools \
nano \
optipng \
mysql-client \
&& \
# Install PHP extensions
docker-php-ext-install \
pdo_mysql \
&& \
# Install Composer
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer \
&& \
# Remove the build deps
apk del .build-deps \
&& \
# Clean out directories that don't need to be part of the image
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /var/www/project
RUN mkdir -p /var/www/project/cms/storage \
&& \
mkdir -p /var/www/project/cms/web/cpresources \
&& \
chown -R www-data:www-data /var/www/project
WORKDIR /var/www/project/cms
# Start php-fpm
CMD php-fpm
This is all analogous to what we do for the regular php container described earlier, except that we don’t go through the steps of installing the latest Composer packages via composer install, because our regular php container takes care of this for us.
This container exists purely to field the rare requests for which we need xdebug to debug or profile our code. When we do, we just need to configure our debugger to use port 9003 (xdebug 3’s default) and away we go!
The majority of the time, this container sits idle doing nothing, but it’s available when we need it for debugging purposes.
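As an example, a VS Code launch configuration for this setup might look like the following. This is a sketch; the path mapping is an assumption based on the container paths used in this article:

```json
// Hypothetical launch.json sketch for debugging PHP inside the xdebug container
{
    "name": "Listen for Xdebug in Docker",
    "type": "php",
    "request": "launch",
    "port": 9003,
    "pathMappings": {
        "/var/www/project/cms": "${workspaceFolder}/cms"
    }
}
```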
Link Service: Queue
The Queue service is nearly an exact copy of the PHP service, and it makes liberal use of YAML aliases to reuse the same settings as the PHP service.
The only difference is the addition of command: ./craft queue/listen 10 which is what gets executed when the container is spun up.
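The YAML anchor/alias pattern looks something like this in docker-compose.yaml. This is a minimal sketch; the service names, image name, and volume paths are placeholders, not the exact project config:

```yaml
# Hypothetical sketch of sharing settings between services via YAML anchors
services:
  php: &php-service        # anchor the whole php service definition
    image: nystudio107/php-dev-craft
    volumes:
      - ./cms:/var/www/project/cms
  queue:
    <<: *php-service       # alias: inherit all of the php service's settings
    command: ./craft queue/listen 10
```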
The purpose of the Queue service is simply to run any background queue jobs efficiently, as discussed in the Robust queue job handling in Craft CMS article.
We just need to set the config/general.php setting runQueueAutomatically to false so that queue jobs are no longer run via web request.
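A minimal sketch of that setting in config/general.php (your config will have other settings too):

```php
<?php
// config/general.php (minimal sketch)
return [
    '*' => [
        // Don't run queue jobs via web requests; the queue container handles them
        'runQueueAutomatically' => false,
    ],
];
```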
It waits until the MySQL container is up and running, and until the composer install has finished, before starting up the queue listener:
#!/bin/bash
# Run Queue shell script
#
# This shell script runs the Craft CMS queue via `php craft queue/listen`
# It waits until the database container responds, then runs any pending
# migrations / project config changes via the `craft-update` Composer script,
# then runs the queue listener that listens for and runs pending queue jobs
#
# @author nystudio107
# @copyright Copyright (c) 2022 nystudio107
# @link https://nystudio107.com/
# @license MIT
cd /var/www/project/cms
# Wait until the MySQL db container responds
echo "### Waiting for MySQL database"
until eval "mysql -h mysql -u $DB_USER -p$DB_PASSWORD $DB_DATABASE -e 'select 1' > /dev/null 2>&1"
do
sleep 1
done
# Wait until the `composer install` is done by looking for the `vendor/autoload.php` file
echo "### Waiting for vendor/autoload.php"
while [ ! -f vendor/autoload.php ]
do
sleep 1
done
# Ensure permissions on directories Craft needs to write to
chown -R www-data:www-data /var/www/project/cms/storage
chown -R www-data:www-data /var/www/project/cms/web/cpresources
# Run any pending migrations/project config changes
su-exec www-data composer craft-update
# Run a queue listener
su-exec www-data php craft queue/listen 10
Link Service: webpack
webpack is the build tool that we use for building the CSS, JavaScript, and other parts of our application.
The setup used here is entirely based on the An Annotated webpack 4 Config for Frontend Web Development article, just with some settings tweaked.
That means our webpack build process runs entirely inside of a Docker container, but we still get all of the Hot Module Replacement goodness for local development.
This service is composed of a base image that contains node itself, all of the Alpine packages needed for headless Chrome, the npm packages we’ll always need to use, and then a project-specific image that contains whatever additional things are needed for our project.
FROM node:16-alpine
# Install packages for headless chrome
RUN apk update \
&& \
apk add --no-cache nmap \
&& \
echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories \
&& \
echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories \
&& \
apk update \
&& \
apk add --no-cache \
# Packages needed for npm install of mozjpeg & cwebp, can't --virtual and apk del later
# Pre-builts do not work on alpine for either:
# ref: https://github.com/imagemin/imagemin/issues/168
# ref: https://github.com/imagemin/cwebp-bin/issues/27
autoconf \
automake \
build-base \
g++ \
gcc \
glu \
libc6-compat \
libtool \
libpng-dev \
libxxf86vm \
make \
nasm \
# Misc packages
nano \
# Image optimization packages
gifsicle \
jpegoptim \
libpng-dev \
libwebp-tools \
libjpeg-turbo-dev \
libjpeg-turbo-utils \
optipng \
pngquant \
# Headless Chrome packages
chromium \
harfbuzz \
"freetype>2.8" \
ttf-freefont \
nss
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
ENV CHROME_BIN /usr/bin/chromium-browser
ENV LIGHTHOUSE_CHROMIUM_PATH /usr/bin/chromium-browser
We’ve based the container on the node image, tagged at version 16
We’re then adding the packages that we need in order to get headless Chrome working (needed for Critical CSS generation), as well as other libraries needed for the Sharp image library to work effectively.
By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/node-dev-base on DockerHub.
Since it’s pre-built, we don’t have to build it every time, and can layer on top of this image anything project-specific via the node-dev-craft container image:
FROM nystudio107/node-dev-base:16-alpine
WORKDIR /var/www/project/
COPY ./npm_install.sh .
RUN chmod a+x npm_install.sh
# Run our webpack build in debug mode
# We'd normally use `npm ci` here, but by using `install`:
# - If `package-lock.json` is present, it will install what is in the lock file
# - If `package-lock.json` is missing, it will update to the latest dependencies
# and create the `package-lock.json` file
# This automatic running adds to the startup overhead of `docker-compose up`
# but saves far more time in not having to deal with out of sync versions
# when working with teams or multiple environments
CMD export CPPFLAGS="-DPNG_ARM_NEON_OPT=0" \
&& \
./npm_install.sh \
&& \
cd /var/www/project/buildchain/ \
&& \
npm run dev
Then, just like with the php-dev-craft image, this does an npm install every time the Docker container is created. While this adds some time, it saves far more by keeping everyone on the team, and every environment, in sync.
We have to do the npm install as part of the Docker image CMD because the file system mounts aren’t in place until the CMD is run.
This allows us to update our npm dependencies just by deleting the package-lock.json file and doing a docker-compose up.
The alternative is doing a docker exec -it craft_webpack_1 /bin/bash to open up a shell in our container, and running the command manually.
This container runs the shell script npm_install.sh when it starts up, to do an npm install if package-lock.json or node_modules/ is not present:
#!/bin/bash
# NPM Install shell script
#
# This shell script runs `npm install` if either the `package-lock.json` file or
# the `node_modules/` directory is not present
#
# @author nystudio107
# @copyright Copyright (c) 2022 nystudio107
# @link https://nystudio107.com/
# @license MIT
cd /var/www/project/buildchain
if [ ! -f "package-lock.json" ] || [ ! -d "node_modules" ]; then
npm install
fi
Link All Aboard!
Hopefully this annotated Docker config has been useful to you. If you use Craft CMS, you can dive in and start using it yourself; if you use something else entirely, the concepts here should still be very salient for your project.
I think that Docker — or some other conceptually similar containerization strategy — is going to be an important technology going forward. So it’s time to jump on board.
To see how you can make your Docker builds even sweeter with make, check out the Using Make & Makefiles to Automate your Frontend Workflow article.
As mentioned earlier, the Docker config used here is used in both the devMode.fm GitHub repo, and in the nystudio107/craft boilerplate Composer project if you want to see some “in the wild” examples.
Happy containerizing!