Andrew Welch · Insights · #deployment #devops #atomic




Atomic Deployments Without Tears

Learn how to use atomic deployments to automatically and safely deploy changes to your website with zero downtime using Continuous Integration (CI) tools

Once you have developed a website, you then have to face the challenge of deploying that website to a live production environment where the world can see it.

Back in the mean old days, this meant firing up an FTP client to upload the website to a remote server.

This type of “cowboy” deployment is not the best choice.

The reason doing it this way isn’t so great is that it’s a manual, error-prone process. Many services in the form of Continuous Integration tools have sprung up to make the process much easier for you and, importantly, automated.

Let computers do the boring, repetitious work that they are good at

We want to be able to deploy our website changes with zero downtime.

This article will show you how you can leverage the CI tool buddy.works to atomically deploy your Craft CMS websites like a pro.

However, the concepts presented here are universal, so if you’re using some other CI tool or CMS/platform, that’s totally fine. Read on!

Anatomy of a web project

Let’s have a look at what a typical project setup might look like:

Simplified web project setup

We work on the project in our local development environment, whether individually or with a team of other developers. We push our code changes up to a git repository in the cloud.

Local development is “where the magic happens”

The git repository is where all source code is kept, and allows us to work with multiple people or multiple revisions without fear. This git repo can be hosted via GitHub, GitLab, or any number of other places.

We may also be using cloud file storage such as Amazon S3 as a place to store the client-uploaded content, as described in the Setting Up AWS S3 Buckets + CloudFront CDN for your Assets article.

A general workflow for code is:

  • Push code changes from local development up to your git repo
  • Pull code changes down from your git repo to your live production or staging servers

If you’re working on a team or in multiple environments, you may also be pulling code down to your local development environment from your git repo as well, to stay in sync with changes other people have made.

Non-Atomic Deployment Flow

But how do you pull code changes down to your live production or staging servers?

Deployment is getting your code from your local development environment to your live production server.

A simple method (dubbed the #YOLO method by Matthew Stein) could be to trigger a shell script when we push to the master branch of our project’s git repo:

cd /home/forge/devmode.fm
git pull origin master
cd /home/forge/devmode.fm/cms
composer install --no-dev --no-progress --no-interaction --prefer-dist --optimize-autoloader
echo "" | sudo -S service php7.1-fpm reload

In my case, this is how I was previously doing deployments for the devMode.fm website: it’s just a shell script that’s executed by a webhook that is triggered when we push to the master branch of our git repo.

Line by line, here’s what this shell script does:

  1. Change directories to the root directory of our project
  2. Pull down the latest changes from the master branch of the project’s git repo
  3. Change directories to the root of the Craft CMS project
  4. Run composer install to install the latest composer dependencies specified in the composer.lock file
  5. Restart php-fpm to clear our opcache

What could possibly go wrong?

For a hobby project site, this is totally fine.

Non-atomic deploy flow

But there are downsides to doing it this way:

  • The deployment is done in multiple steps
  • The work happens on the production server, which is also serving frontend requests
  • The entire git repo is deployed to the server, when only part of it is actually needed on the production server
  • If there’s a problem with the deploy, the site could be left broken
  • Any website CSS/JavaScript assets need to be built in local development, and checked into the git repo

You might notice that there are a number of steps listed, and some of the steps such as git pull origin master and composer install can be quite lengthy processes.

And we’re doing them in situ, so if someone visits the website when we’re in the middle of pulling down our code, or Composer is in the middle of installing PHP packages… that person may see errors on the frontend.

The fact that there are multiple, lengthy steps in this process makes it a non-atomic deployment.

Atomic Deployment Flow

So while we have an automated deployment method, it’s a bit fragile in that there’s a period of time during which people visiting our website may see it broken. To solve this, let’s introduce how an atomic deployment would work.

An atomic deployment is just fancy nomenclature for a deployment that happens in such a way that the switch to the new version of the site happens as a single (or atomic) step.

This allows for zero downtime, and no weirdness in partially deployed sites.

An atomic deployment is a magician’s finger-snap and “tada”!

We’re going to set up our atomic deployments using buddy.works, which is a tool that I’ve chosen because it is easy to use, but also very powerful.

There’s a free tier that you can use for up to 5 projects while you’re testing it out, so you can give it a whirl, or you can use some other deployment tool like Envoyer (and there are many others). The principle is the same.

Here’s what an atomic deployment setup might look like:

Atomic deploy flow

Note that we’re still doing the same work as in our non-atomic deployment, but we’re changing where and how that work is done.

This nicely solves all of the downsides we noted in our non-atomic deployment:

  • The switchover to the newly deployed website code happens in a single atomic step
  • No work is done on the live production server other than deploying the files
  • Only the parts of the project needed to serve the website are deployed
  • If there’s a problem with the build, it never reaches the server
  • Any website CSS/JavaScript assets are built in the cloud

So this is all wonderful, but how does it work? Continue on, dear reader!

Atomic Deployments Under the Hood

We’ll get to the actual setup in a bit, but first I think it’s instructive to see how it actually works under the hood.

As usual, we’ll be using the devMode.fm website as our guinea pig, the source code of which is available in the nystudio107/devmode repo.

Our project root directory looks like this on our production server:

forge@nys-production ~/devmode.fm $ ls -Al
total 32
lrwxrwxrwx  1 forge forge   49 Jun 28 19:08 current -> releases/33a5a7f984521811c5db597c7eef1c76c00d48e2
drwxr-xr-x  7 forge forge 4096 Jun 27 01:39 deploy-cache
-rw-rw-r--  1 forge forge 2191 Jun 22 18:14 .env
drwxrwxr-x 12 forge forge 4096 Jun 28 19:08 releases
drwxrwxr-x  5 forge forge 4096 Jun 22 18:11 storage
drwxrwxr-x  2 forge forge 4096 Jun 26 12:30 transcoder

This may look a little foreign to you, but bear with me, you’ll get it!

The deploy-cache/ directory is where files are stored as they are being uploaded to the server. In our case, it looks like this:

forge@nys-production ~/devmode.fm $ ls -Al deploy-cache/
total 328
-rw-r--r--  1 forge forge   2027 Jun 26 22:46 composer.json
-rw-r--r--  1 forge forge 287399 Jun 27 01:39 composer.lock
drwxr-xr-x  4 forge forge   4096 Jun 27 01:39 config
-rwxr-xr-x  1 forge forge    577 Jun 23 07:25 craft
-rw-r--r--  1 forge forge    330 Jun 23 07:25 craft.bat
-rw-r--r--  1 forge forge   1582 Jun 23 07:25 example.env
drwxr-xr-x  3 forge forge   4096 Jun 23 07:25 modules
drwxr-xr-x 11 forge forge   4096 Jun 23 07:25 templates
drwxr-xr-x 60 forge forge   4096 Jun 27 01:40 vendor
drwxr-xr-x  5 forge forge   4096 Jun 28 19:08 web

This should look pretty familiar to you if you’re a Craft CMS developer: it’s the project root for the actual Craft CMS project. Check out the Setting up a New Craft CMS 3 Project article for more information on that.

Since this is a cache directory, the contents can be deleted without any ill effect, other than that our next deployment will be slower, since it’ll need to be done from scratch.

Next let’s have a look at the releases/ directory:

forge@nys-production ~/devmode.fm $ ls -Al releases/
total 48
drwxr-xr-x  7 forge forge 4096 Jun 27 14:17 2c8eef7c73f20df9d02f6f071656331ca9e08eb0
drwxr-xr-x  7 forge forge 4096 Jun 28 19:08 33a5a7f984521811c5db597c7eef1c76c00d48e2
drwxrwxr-x  7 forge forge 4096 Jun 26 22:48 42372b0cd7a66f98d7f4dc83d8d99c4d9a0fb1f6
drwxrwxr-x  7 forge forge 4096 Jun 27 01:43 7b3d57dfedf5bf275aeddc6d799e3264e02d2b88
drwxrwxr-x  8 forge forge 4096 Jun 26 21:21 8c2448d252651b8cb0d69a72e327dac3541c9ba9
drwxr-xr-x  7 forge forge 4096 Jun 27 14:08 9b5c8c7cf6a7111220b66d21d811f8e5a1800507
drwxrwxr-x  8 forge forge 4096 Jun 23 08:16 beaef13f5bda9d7c2bb0e88b300f68d3b663528e
drwxrwxr-x  8 forge forge 4096 Jun 26 21:26 c56c13127b4a5ff779a155a211c07f604a4dcf8b
drwxrwxr-x  7 forge forge 4096 Jun 27 14:04 ce831a76075f57ceff8822641944e255ab9bf556
drwxrwxr-x  8 forge forge 4096 Jun 23 07:57 ebba675ccd2bb372ef82795f076ffd933ea14a31

Here we see 10 really weirdly named directories. The names here don’t really matter (they are automatically generated hashes), but what does matter is that each one of these directories contains a full deployment of your website.

You can set how many of these directories should be kept on the server; in my case I have it set to 10.
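The trimming itself is a one-liner, as we’ll see in the deploy script later. As a sketch of the idea (run against throwaway directories in a scratch location, not a real server), keeping only the 10 most recently modified entries looks like this:

```shell
# Create 12 fake release directories in a scratch location
DIR=$(mktemp -d)
cd "$DIR"
for i in $(seq 1 12); do
  mkdir "release-$i"
  sleep 0.1   # ensure distinct modification times
done

# List newest-first, skip the first 10 entries, delete the rest
ls -t | tail -n +11 | xargs rm -rf
```

After this runs, only the 10 newest directories remain.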

If you look carefully at the current symlink:

lrwxrwxrwx  1 forge forge   49 Jun 28 19:08 current -> releases/33a5a7f984521811c5db597c7eef1c76c00d48e2

…you’ll see that it actually points to the current deployment in the releases/ directory (notice that the hash-named directory it points to has the latest modification date on it, too).

So when a deployment happens:

  • Files are synced into the deploy-cache/ directory (we’ll get into this more later)
  • Then those files are copied from the deploy-cache/ directory to a hash-named directory in the releases/ directory
  • After everything is done, the current symlink is updated to point to the latest deployment

That’s it! That’s the atomic part: the changing of the current symlink is the single atomic operation that makes that version of the website live.
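Those three steps boil down to a handful of shell commands. Here’s a self-contained sketch you can run in a scratch directory (the revision hash is made up; a real deploy would use the commit hash):

```shell
set -e
ROOT=$(mktemp -d)       # stand-in for /home/forge/devmode.fm
cd "$ROOT"
REVISION="33a5a7f9"     # hypothetical short revision hash

# 1. Files are synced into deploy-cache/ (simulated here)
mkdir -p deploy-cache/web
echo "new version" > deploy-cache/web/index.html

# 2. Copy deploy-cache/ to a hash-named directory in releases/
mkdir -p releases
cp -R deploy-cache "releases/$REVISION"

# 3. The atomic step: point the current symlink at the new release
ln -nfs "releases/$REVISION" current

readlink current   # prints: releases/33a5a7f9
```

Rolling back to an older release is the same ln -nfs, just pointed at a previous hash-named directory.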

We just have to make sure that our web server root path contains the symlink, so we can swap out where it points to as needed:

    root /home/forge/devmode.fm/current/web;

If you ever encounter a regression, you can roll your website back to a previous revision just by changing the current symlink.

Also note that we have storage/ and transcoder/ directories in our project root, as well as a .env file.

These are all directories & files that we want to persist between and be shared by each atomic deployment. Since each deployment is a clean slate, we just move anything we need to keep persistent into the root directory, and symlink to them from each deployment.

The .env file is something you’ll have to create yourself manually, using the example.env as a guide.

The storage/ directory is Craft’s runtime storage directory. We keep this as a persistent directory so that log files and other Craft runtime files can persist across atomic deployments.

The transcoder/ directory is used to store the transcoded audio for the podcast, as created by our Transcoder plugin. It’s very project specific, so you’re unlikely to need it in your projects.

Let’s have a look at the current deployment in the releases/ directory:

forge@nys-production ~/devmode.fm $ ls -Al releases/33a5a7f984521811c5db597c7eef1c76c00d48e2/
total 320
-rw-r--r--  1 forge forge   2027 Jun 29 14:10 composer.json
-rw-r--r--  1 forge forge 287399 Jun 29 14:10 composer.lock
drwxr-xr-x  4 forge forge   4096 Jun 29 14:10 config
-rwxr-xr-x  1 forge forge    577 Jun 29 14:10 craft
-rw-r--r--  1 forge forge    330 Jun 29 14:10 craft.bat
lrwxrwxrwx  1 forge forge     27 Jun 29 14:10 .env -> /home/forge/devmode.fm/.env
-rw-r--r--  1 forge forge   1582 Jun 29 14:10 example.env
drwxr-xr-x  3 forge forge   4096 Jun 29 14:10 modules
lrwxrwxrwx  1 forge forge     30 Jun 29 14:10 storage -> /home/forge/devmode.fm/storage
drwxr-xr-x 11 forge forge   4096 Jun 29 14:10 templates
drwxr-xr-x 60 forge forge   4096 Jun 29 14:10 vendor
drwxr-xr-x  6 forge forge   4096 Jun 29 14:11 web

N.B.: this is exactly the same as doing ls -Al current/, since the current symlink points to this latest deployment.

Here we can see the current deployment root, with the .env & storage aliases in place, pointing back to the persistent files/directories in our project root.

Something that might not be immediately apparent is that we’re only deploying part of what is in our project git repo. The git repo root looks like this:

❯ ls -Al
total 80
-rw-r--r--   1 andrew  staff   868 Jun 22 17:24 .gitignore
-rw-r--r--   1 andrew  staff  1828 Feb 18 10:22 CHANGELOG.md
-rw-r--r--   1 andrew  staff  1074 Feb  4 09:54 LICENSE.md
-rw-r--r--   1 andrew  staff  7461 Jun 29 09:03 README.md
-rw-r--r--   1 andrew  staff  5094 Jun 27 14:15 buddy.yml
drwxr-xr-x  10 andrew  staff   320 Feb 17 16:58 buildchain
drwxr-xr-x  16 andrew  staff   512 Jun 27 14:06 cms
-rwxr-xr-x   1 andrew  staff  2064 Mar 17 16:37 docker-compose.yml
drwxr-xr-x  10 andrew  staff   320 Feb 17 16:58 docker-config
drwxr-xr-x   7 andrew  staff   224 Mar 17 16:37 scripts
drwxr-xr-x  12 andrew  staff   384 Feb 17 15:51 src
-rw-r--r--   1 andrew  staff    47 Jun 27 14:06 tsconfig.json

So instead of deploying all of the source code and build tools that aren’t needed to serve the website (they are only needed to build it), we instead deploy just what’s in the cms/ directory.

Nice.

Now that we know how it works under the hood, let’s create the atomic deployment pipeline!

Step 1: Creating a new project

We’ll go step by step through how to build a simple but effective atomic deployment with buddy.works.

The deployment pipeline we’re going to set up will:

  • Build our CSS & JavaScript assets and install our Composer packages in the cloud
  • Rsync only the files needed to serve the website down to the production server
  • Atomically switch the live site over to the new release
  • Prep Craft CMS (database backup, cache clearing, migrations, Project Config sync)
  • Send a notification to Slack

So let’s get down to it.

After logging into buddy.works, make sure that you’ve linked buddy.works to your git repo provider (such as GitHub, GitLab, etc.). It needs this to allow you to choose a git repo for your atomic deployment setup, and also to be notified when you push code to that git repo.

You can configure this and other settings by clicking on your user icon in the upper-right corner of the screen, and choosing Manage your project.

Once that’s all set, click on New Project from your Dashboard:

Creating a new project

Next click on the Add a new pipeline button to create a new deployment pipeline for this project. A pipeline is just a series of instructions to execute in sequence.

Adding a pipeline

Set the Name to Build & Deploy to Production, set Trigger Mode to On push, and then set the Trigger to Single Branch and master (or whatever the name of your primary git repo branch is).

Then click on + Site URL, Currently deployed revision, Clone depth & Visibility to display more options, and set the Target website URL to whatever your live production website URL is.

We won’t be changing anything else here, so click on Add a new pipeline to create a new empty pipeline (you can have as many pipelines as you like per project).

Step 2: Setting Variables

Before we add any actions to our pipeline, we’re going to set some environment variables for use in the buddy.works build pipeline.

Click on the Edit pipeline settings link on the right, then click on Variables:

Setting environment variables

We’re adding these variables to our pipeline to make it easier to build our individual actions, and to make our pipeline generic so it can be used with any project.

Add the following key/value pair variables by clicking on Add a new variable, changing them to suit your project (by convention, environment variables are SCREAMING_SNAKE_CASE):

  • PROJECT_SHORTNAME — devmode — a short name for the project with no spaces or punctuation; it’s used to create working directories in the buddy.works containers
  • PROJECT_URL — https://devmode.fm — a URL to your live production website
  • REMOTE_PROJECT_ROOT — /home/forge/devmode.fm — a path to the root directory of the project on the server
  • REMOTE_SSH_HOST — devmode.fm — the host name that should be used to ssh into your server
  • REMOTE_SSH_USER — forge — the user name that should be used to ssh into your server

N.B.: the buddy.works docs say to use the variables in a ${VARIABLE_NAME} format, but you can also use them just as $VARIABLE_NAME (in fact the latter is how they are auto-completed for you).

These variables are defined inside of the pipeline, but you can also have variables that are project-wide, as well as workspace-wide in buddy.works.
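For example, once these variables are defined, any action’s shell script can reference them in either form (the values below mirror this project’s settings; here we set them locally just to demonstrate the expansion):

```shell
# Stand-ins for the pipeline-provided variables
PROJECT_SHORTNAME=devmode
REMOTE_SSH_HOST=devmode.fm

# Both reference styles expand the same way
echo "Deploying ${PROJECT_SHORTNAME} to $REMOTE_SSH_HOST"
# prints: Deploying devmode to devmode.fm
```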

Step 3: Execute: webpack build

Now that our variables are all set, click on Actions and then click on the Add the first action button.

Create action “Execute: webpack build”

Type webpack into the search field to find the Webpack action, and click on it.

webpack build run script

We’re assuming you are using the webpack setup described in the An Annotated webpack 4 Config for Frontend Web Development article and the Docker setup described in the An Annotated Docker Config for Frontend Web Development article.

Add the following script under the Run tab; it installs our npm packages via npm install and then executes webpack to build our assets:

cd buildchain
npm install
npm run build

You can change this to be whatever you need to execute your CSS & JavaScript build, if you’re using something other than the aforementioned setups.

If your assets will end up being served from a CDN, you can set the PUBLIC_PATH environment variable used by the Dockerized webpack build to the value of a $PUBLIC_PATH buddy.works environment variable:

cd buildchain
npm install
PUBLIC_PATH="$PUBLIC_PATH" npm run build

This will cause the built assets in the manifest.json to be prefixed with the CDN URL specified in the $PUBLIC_PATH buddy.works variable.
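As an illustration, with $PUBLIC_PATH set to a hypothetical CDN URL like https://cdn.example.com/ (the file names below are invented for this example), the manifest.json entries would end up looking something like:

```json
{
  "app.js": "https://cdn.example.com/js/app.abc123.js",
  "app.css": "https://cdn.example.com/css/app.abc123.css"
}
```

Without PUBLIC_PATH set, the same entries would be root-relative paths like /js/app.abc123.js.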

Next click on the Environment tab, and change the Image to our custom node-dev-base that we used in the An Annotated Docker Config for Frontend Web Development article, since it has everything we need in it for building our CSS & JavaScript:

webpack build environment

This Environment tab allows you to pick any Docker image you like — public or private — to use when running your webpack build in the cloud. The default is an old (but official) Node 6 image at the time of this writing.

Clicking on the Action tab allows you to change the name of the action; change it to: Execute: webpack build.

Step 4: Execute: composer install

Next up, we’ll add another action to our pipeline by clicking on the + icon below the Execute: webpack build action.

Create action “Execute: composer install”

Type php into the search field to find the PHP action, and click on it.

composer install run script

We’re assuming you are using the Docker setup described in the An Annotated Docker Config for Frontend Web Development article.

Add the following script under the Run tab; it changes directories to the cms/ directory, and then runs composer install with some flags:

cd cms
composer install --no-dev --no-progress --no-scripts --no-interaction --prefer-dist --optimize-autoloader --ignore-platform-reqs

You can change this to be whatever you need to execute to install your Composer packages, if you’re using something other than the aforementioned setup.

For example, when installing on production, you often want to use the --no-dev flag with composer install so that you do not install anything in require-dev (which are packages for local development only).

Next click on the Environment tab, and change the Image to our custom php-dev-base that we used in the An Annotated Docker Config for Frontend Web Development article, since it has everything we need for our PHP application:

composer install environment

This Environment tab allows you to pick any Docker image you like — public or private — to use when running your composer install in the cloud. The default is a PHP 7.4 image at the time of this writing.

Still on the Environment tab, scroll down to CUSTOMIZE ENVIRONMENT and replace the entire shell script that is there by default with this if you’re using Ubuntu-based Docker images:

echo "memory_limit=-1" >> /usr/local/etc/php/conf.d/buddy.ini
apt-get update && apt-get install -y git zip
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# php ext pdo_pgsql
docker-php-ext-install pdo_pgsql pgsql

If you’re using Alpine-based Docker images, you would use this instead, which uses Alpine’s apk package manager:

echo "memory_limit=-1" >> /usr/local/etc/php/conf.d/buddy.ini
apk add git zip
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# php ext pdo_pgsql
docker-php-ext-install pdo_pgsql pgsql

This script runs inside the Docker container to customize the environment by setting PHP to have no memory limit, installing Composer, and then installing some Postgres PHP extensions. If you’re using MySQL, you’d change it to:

# php ext pdo_mysql
docker-php-ext-install pdo_mysql mysqli

In actuality, it doesn’t matter, because we’re not even doing anything with the database on deploy currently.

Clicking on the Action tab allows you to change the name of the action; change it to: Execute: composer install.

Step 5: Rsync files to production

Now that we have our updated website code from our git repo, our built CSS & JavaScript assets, and all of our Composer packages in the Docker container in the cloud, we need to deploy them to our production server.

To do this, we’re going to use rsync to sync only the files that have changed to our deploy-cache/ directory.

Add another action to our pipeline by clicking on the + icon below the Execute: composer install action.

Create action “Rsync files to production”

Type rsync into the search field to find the RSync action, and click on it.

Rsync setup #1

Here we’ve chosen to synchronize just the cms/ directory of our project with the deploy-cache/ directory on our live production server.

To allow buddy.works to access our live production server, we have to provide it with how to connect to our server. Fortunately, we can use the environment variables set up in Step 2.

So set Hostname & Port to $REMOTE_SSH_HOST, Login to $REMOTE_SSH_USER, and Authentication mode to Buddy workspace key.

We’re using ssh keys here because the provisioner I use, Laravel Forge, disables password-based auth by default as a security best practice.

Rsync setup #2

If you’re going to use Buddy workspace key too, you’ll need to ssh into your live production server, and run the code snippet. This will add Buddy’s workspace key to your live production server’s list of hosts that are authorized to connect to it.

Then set Remote path to $REMOTE_PROJECT_ROOT/deploy-cache. This tells the rsync action what directory on the live production server should be synced with the cms/ directory in our buddy.works Docker container in the cloud.

Finally, check the following:

  • Compress file data during the transfer
  • Archive mode
  • Delete extraneous files
  • Recurse into directories

Using rsync for our deployment allows it to be very smart about deploying only files that have actually changed, and also to compress the files before they are transferred over the wire.

N.B.: In the Ignore paths tab, you can add any directories you want ignored during the sync.

Clicking on the Action tab allows you to change the name of the action; change it to: Rsync files to production.

Step 6: Atomic Deploy

Finally, we’re getting to the actual atomic deployment!

Add another action to our pipeline by clicking on the + icon below the Rsync files to production action.

Create action from template “Atomic deployment”

This time we’re going to click on Templates and then click on Atomic Deployment. You’ll see some documentation on what the Atomic Deployment template does; click on Configure this template:

Atomic deployment template configure #1

For Source, click on Pipeline Filesystem and leave Source path set to /.

Set Hostname & Port to $REMOTE_SSH_HOST, Login to $REMOTE_SSH_USER, and Authentication mode to Buddy workspace key, just like we did in the Rsync action.

Atomic deployment template configure #2

Again we’re using the same Buddy workspace key we used in the Rsync action, so we won’t need to re-add this key to our live production server.

Set Remote path to the $REMOTE_PROJECT_ROOT buddy.works variable, and the double-negative Don’t delete files set to Off. You can also configure how many releases to keep on your server via How many old releases should be kept.

Then click on Add this action.

We’re not quite done with this action though. Click on it again in the list of pipeline actions to edit it, and you’ll see some shell code the template added for us under RUN SSH COMMANDS:

if [ -d "releases/$BUDDY_EXECUTION_REVISION" ] && [ "$BUDDY_EXECUTION_REFRESH" = "true" ];
then
 echo "Removing: releases/$BUDDY_EXECUTION_REVISION"
 rm -rf releases/$BUDDY_EXECUTION_REVISION;
fi
if [ ! -d "releases/$BUDDY_EXECUTION_REVISION" ];
then
 echo "Creating: releases/$BUDDY_EXECUTION_REVISION"
 cp -dR deploy-cache releases/$BUDDY_EXECUTION_REVISION;
fi
echo "Linking current to revision: $BUDDY_EXECUTION_REVISION"
rm -f current
ln -s releases/$BUDDY_EXECUTION_REVISION current
echo "Removing old releases"
cd releases && ls -t | tail -n +11 | xargs rm -rf

This is the code that handles creating the hash-named revision directories, copying files from the deploy-cache/ directory, updating the current symlink, and trimming old releases.

You needn’t grok all that it’s doing; we’re just going to make a small addition to it to create and symlink our persistent directories & files:

if [ -d "releases/$BUDDY_EXECUTION_REVISION" ] && [ "$BUDDY_EXECUTION_REFRESH" = "true" ];
then
 echo "Removing: releases/$BUDDY_EXECUTION_REVISION"
 rm -rf releases/$BUDDY_EXECUTION_REVISION;
fi
if [ ! -d "releases/$BUDDY_EXECUTION_REVISION" ];
then
 echo "Creating: releases/$BUDDY_EXECUTION_REVISION"
 cp -dR deploy-cache releases/$BUDDY_EXECUTION_REVISION;
fi
echo "Creating: persistent directories"
mkdir -p storage
mkdir -p transcoder
echo "Symlinking: persistent files & directories"
ln -nfs $REMOTE_PROJECT_ROOT/.env $REMOTE_PROJECT_ROOT/releases/$BUDDY_EXECUTION_REVISION
ln -nfs $REMOTE_PROJECT_ROOT/storage $REMOTE_PROJECT_ROOT/releases/$BUDDY_EXECUTION_REVISION
ln -nfs $REMOTE_PROJECT_ROOT/transcoder $REMOTE_PROJECT_ROOT/releases/$BUDDY_EXECUTION_REVISION/web
echo "Linking current to revision: $BUDDY_EXECUTION_REVISION"
rm -f current
ln -s releases/$BUDDY_EXECUTION_REVISION current
echo "Removing old releases"
cd releases && ls -t | tail -n +11 | xargs rm -rf

Here we’re ensuring that the storage/ and transcoder/ directories exist, and then we’re symlinking them and our .env file from their persistent location in the project root to the appropriate places in the deployed website.

Clicking on the Action tab allows you to change the name of the action; change it to: Atomic deploy.

N.B.: This template will create an additional action for you named Upload files to $REMOTE_SSH_HOST — you can delete this action, because we’re using our Rsync action to do it.

Step 7: Prep Craft CMS

Add another action to our pipeline by clicking on the + icon below the Atomic deploy action.

Technically this action could be combined with the Atomic deploy action, but logically they do different things, so keeping them separate seems appropriate.

Create action “Prep Craft CMS”

Type ssh into the search field to find the SSH action, and click on it.

Prep Craft CMS setup #1

Under RUN SSH COMMANDS we have the following shell script:

# Ensure the craft script is executable
chmod a+x craft
# Restart our long running queue listener process
echo "" | sudo -S supervisorctl restart all
# Backup the database just in case any migrations or Project Config changes have issues
php craft backup/db
# Run pending migrations, sync project config, and clear caches
php craft clear-caches/all
php craft migrate/all
php craft project-config/sync

First, it ensures the craft script is executable, and restarts our long-running queue listener process, as discussed in the Robust queue job handling in Craft CMS article.

Next it backs up the database, just in case any migrations or Project Config changes cause issues.

You can never be too careful.

Finally, it ensures that all caches are cleared, all migrations are run, and Project Config is synced on every deploy.

Something that’s missing from here that you might be used to is restarting php-fpm to clear the opcache.

We don’t need to do this because we send $realpath_root down to php-fpm, which means that the path is different for every release, and we can just let the opcache work naturally. For more information on this, check out the PHP’s OPcache and Symlink-based Deploys article.
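For reference, here’s a sketch of what that looks like in the nginx server block for this kind of symlink-based deploy (the php-fpm socket path and fastcgi include shown are assumptions; adapt them to your server):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    # $realpath_root resolves the current symlink to the actual
    # releases/<hash> directory, so each deploy gets fresh opcache
    # entries without needing to reload php-fpm
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    fastcgi_param DOCUMENT_ROOT $realpath_root;
}
```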

Set Hostname & Port to $REMOTE_SSH_HOST, Login to $REMOTE_SSH_USER, and Authentication mode to Buddy workspace key, just like we did in the previous actions.

Prep Craft CMS setup #2

Again we’re using the same Buddy workspace key we used in the Rsync and Atomic deploy actions, so we won’t need to re-add this key to our live production server.

Then set Working directory to $REMOTE_PROJECT_ROOT/current to tell buddy.works which directory should be current when the script above is run.

Clicking on the Action tab allows you to change the name of the action; change it to: Prep Craft CMS.

Step 8: Send notification to nystudio107 channel

Add another action to our pipeline by clicking on the + icon below the Prep Craft CMS action.

This optional action sends a notification on deploy to the #nystudio107 channel on the private nystudio107 Slack.

Create action “Send notification to nystudio107 channel”

Type slack into the search field to find the Slack action, and click on it.

Send notification to nystudio107 channel setup

You’ll have to grant buddy.works access to your Slack by auth’ing it, then set the Send message to:

[#$BUDDY_EXECUTION_ID] $BUDDY_EXECUTION_REVISION_SUBJECT - $BUDDY_EXECUTION_REVISION_COMMITTER_NAME

Or customize it however you like, and configure the Integration and Target channel as appropriate for your Slack.

Clicking on the Action tab allows you to change the name of the action; change it to: Send notification to nystudio107 channel.

The Golden Road (to unlimited deployment)

If all of this setup seems like a whole lot of work to you, it’s really not so bad once you are familiar with the buddy.works GUI.

However, I also have good news for you. There’s a reason why we used environment variables: buddy.works allows you to save your entire configuration out to a buddy.yml file.

Go to your project view, and click on YAML configuration: OFF and you’ll see:

buddy.yml for automated configuration

If you have a buddy.yml in your project root and switch your project to YAML configuration: ON, then you’ll get your pipelines configured for you automatically by the buddy.yml file:

- pipeline: "Build & Deploy to Production"
  trigger_mode: "ON_EVERY_PUSH"
  ref_name: "master"
  ref_type: "BRANCH"
  target_site_url: "https://devmode.fm/"
  trigger_condition: "ALWAYS"
  actions:
  - action: "Execute: webpack build"
    type: "BUILD"
    working_directory: "/buddy/$PROJECT_SHORTNAME"
    docker_image_name: "nystudio107/node-dev-base"
    docker_image_tag: "12-alpine"
    execute_commands:
    - "cd buildchain"
    - "npm install"
    - "npm run build"
    volume_mappings:
    - "/:/buddy/$PROJECT_SHORTNAME"
    trigger_condition: "ALWAYS"
    shell: "BASH"
  - action: "Execute: composer install"
    type: "BUILD"
    working_directory: "/buddy/$PROJECT_SHORTNAME"
    docker_image_name: "nystudio107/php-dev-base"
    docker_image_tag: "latest"
    execute_commands:
    - "cd cms"
    - "composer install --no-dev --no-progress --no-scripts --no-interaction --prefer-dist --optimize-autoloader --ignore-platform-reqs"
    setup_commands:
    - "echo \"memory_limit=-1\" >> /usr/local/etc/php/conf.d/buddy.ini"
    - "apt-get update && apt-get install -y git zip"
    - "curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"
    - "# php ext pdo_mysql"
    - "docker-php-ext-install pdo_pgsql pgsql"
    volume_mappings:
    - "/:/buddy/$PROJECT_SHORTNAME"
    trigger_condition: "ALWAYS"
    shell: "BASH"
  - action: "Rsync files to production"
    type: "RSYNC"
    local_path: "cms/"
    remote_path: "$REMOTE_PROJECT_ROOT/deploy-cache"
    login: "$REMOTE_SSH_USER"
    host: "$REMOTE_SSH_HOST"
    port: "22"
    authentication_mode: "WORKSPACE_KEY"
    archive: true
    delete_extra_files: true
    recursive: true
    compress: true
    deployment_excludes:
    - "/.git/"
    trigger_condition: "ALWAYS"
  - action: "Atomic deploy"
    type: "SSH_COMMAND"
    working_directory: "$REMOTE_PROJECT_ROOT"
    login: "$REMOTE_SSH_USER"
    host: "$REMOTE_SSH_HOST"
    port: "22"
    authentication_mode: "WORKSPACE_KEY"
    commands:
    - "if [ -d \"releases/$BUDDY_EXECUTION_REVISION\" ] && [ \"$BUDDY_EXECUTION_REFRESH\" = \"true\" ];"
    - "then"
    - " echo \"Removing: releases/$BUDDY_EXECUTION_REVISION\""
    - " rm -rf releases/$BUDDY_EXECUTION_REVISION;"
    - "fi"
    - "if [ ! -d \"releases/$BUDDY_EXECUTION_REVISION\" ];"
    - "then"
    - " echo \"Creating: releases/$BUDDY_EXECUTION_REVISION\""
    - " cp -dR deploy-cache releases/$BUDDY_EXECUTION_REVISION;"
    - "fi"
    - "echo \"Creating: persistent directories\""
    - "mkdir -p storage"
    - "echo \"Symlinking: persistent files & directories\""
    - "ln -nfs $REMOTE_PROJECT_ROOT/.env $REMOTE_PROJECT_ROOT/releases/$BUDDY_EXECUTION_REVISION"
    - "ln -nfs $REMOTE_PROJECT_ROOT/storage $REMOTE_PROJECT_ROOT/releases/$BUDDY_EXECUTION_REVISION"
    - "ln -nfs $REMOTE_PROJECT_ROOT/transcoder $REMOTE_PROJECT_ROOT/releases/$BUDDY_EXECUTION_REVISION/web"
    - "echo \"Linking current to revision: $BUDDY_EXECUTION_REVISION\""
    - "rm -f current"
    - "ln -s releases/$BUDDY_EXECUTION_REVISION current"
    - "echo \"Removing old releases\""
    - "cd releases && ls -t | tail -n +11 | xargs rm -rf"
    trigger_condition: "ALWAYS"
    run_as_script: true
    shell: "BASH"
  - action: "Prep Craft CMS"
    type: "SSH_COMMAND"
    working_directory: "$REMOTE_PROJECT_ROOT/current"
    login: "$REMOTE_SSH_USER"
    host: "$REMOTE_SSH_HOST"
    port: "22"
    authentication_mode: "WORKSPACE_KEY"
    commands:
    - "# Ensure the craft script is executable"
    - "chmod a+x craft"
    - "# Restart our long running queue listener process"
    - "echo \"\" | sudo -S supervisorctl restart all"
    - "# Backup the database just in case any migrations or Project Config changes have issues"
    - "php craft backup/db"
    - "# Run pending migrations, sync project config, and clear caches"
    - "php craft clear-caches/all"
    - "php craft migrate/all"
    - "php craft project-config/apply"
    trigger_condition: "ALWAYS"
    run_as_script: true
    shell: "BASH"
  - action: "Send notification to nystudio107 channel"
    type: "SLACK"
    content: "[#$BUDDY_EXECUTION_ID] $BUDDY_EXECUTION_REVISION_SUBJECT - $BUDDY_EXECUTION_REVISION_COMMITTER_NAME"
    blocks: "[{\"type\":\"section\",\"fields\":[{\"type\":\"mrkdwn\",\"text\":\"*Successful execution:* <$BUDDY_EXECUTION_URL|Execution #$BUDDY_EXECUTION_ID $BUDDY_EXECUTION_COMMENT>\"},{\"type\":\"mrkdwn\",\"text\":\"*Pipeline:* <$BUDDY_PIPELINE_URL|$BUDDY_PIPELINE_NAME>\"},{\"type\":\"mrkdwn\",\"text\":\"*Branch:* $BUDDY_EXECUTION_BRANCH\"},{\"type\":\"mrkdwn\",\"text\":\"*Project:* <$BUDDY_PROJECT_URL|$BUDDY_PROJECT_NAME>\"}]}]"
    channel: "G6AKRT78V"
    channel_name: "devmode"
    trigger_condition: "ALWAYS"
    integration_hash: "5ef0d26820cfeb531cb10738"
  - action: "Send notification to devmode channel"
    type: "SLACK"
    trigger_time: "ON_FAILURE"
    content: "[#$BUDDY_EXECUTION_ID] $BUDDY_EXECUTION_REVISION_SUBJECT - $BUDDY_EXECUTION_REVISION_COMMITTER_NAME"
    blocks: "[{\"type\":\"section\",\"fields\":[{\"type\":\"mrkdwn\",\"text\":\"*Failed execution:* <$BUDDY_EXECUTION_URL|Execution #$BUDDY_EXECUTION_ID $BUDDY_EXECUTION_COMMENT>\"},{\"type\":\"mrkdwn\",\"text\":\"*Pipeline:* <$BUDDY_PIPELINE_URL|$BUDDY_PIPELINE_NAME>\"},{\"type\":\"mrkdwn\",\"text\":\"*Branch:* $BUDDY_EXECUTION_BRANCH\"},{\"type\":\"mrkdwn\",\"text\":\"*Project:* <$BUDDY_PROJECT_URL|$BUDDY_PROJECT_NAME>\"}]}]"
    channel: "G6AKRT78V"
    channel_name: "devmode"
    trigger_condition: "ALWAYS"
    integration_hash: "5ef0d26820cfeb531cb10738"
  variables:
  - key: "PROJECT_SHORTNAME"
    value: "devmode"
  - key: "PROJECT_URL"
    value: "https://devmode.fm"
  - key: "REMOTE_PROJECT_ROOT"
    value: "/home/forge/devmode.fm"
  - key: "REMOTE_SSH_HOST"
    value: "devmode.fm"
  - key: "REMOTE_SSH_USER"
    value: "forge"

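If you want to see the heart of the Atomic deploy action in isolation, here's a standalone sketch of the symlink swap and release pruning, run in a throwaway directory with made-up revision names:

```shell
# Run the swap in a scratch directory with fake revision names
ROOT=$(mktemp -d)
mkdir -p "$ROOT/releases/abc123" "$ROOT/releases/def456"
cd "$ROOT"

# "current" may already point at the previous release, so remove it
# first, then link it to the new revision (as the pipeline does)
rm -f current
ln -s releases/def456 current
readlink current   # -> releases/def456

# Keep only the 10 newest releases by mtime; the -r flag (a small
# addition to the pipeline's version) stops xargs from running rm
# with no arguments when there is nothing to prune
(cd releases && ls -t | tail -n +11 | xargs -r rm -rf)
```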
Because we refactored the things that change from project to project into environment variables, it's super easy to reuse this config on multiple projects.

And here’s what the final pipeline looks like in the GUI:

buddy.works final deployment pipeline

One more deploy for the road

The advantages that I find with buddy.works over tools like Ansible & Puppet or services like DeployBot & Envoyer are that it's very easy to set up, and you can run all of your build steps in Docker containers in the cloud.

Because everything runs in Docker containers in the cloud, you also do not need Composer or Node or anything else that's used only to “build the thing” installed on your server.

GitLab CI/CD works similarly, and is also a solid choice. But I prefer that buddy.works is decoupled from where the git repo is hosted, because this flexibility can be very handy when dealing with varied client needs & requirements.

There's also plenty more that buddy.works can do that we haven't explored here. For example, you'd typically set up another pipeline for your staging server, which would auto-deploy on pushes to the develop branch.

We also could go a step further with our deployments and do blue/green database deployments if the project warranted it.

Automated acceptance tests could be run in the buddy.works containers, and deployment would only happen if they pass.

Or we could run accessibility tests on deploy, and block deployment if there were regressions there.

The options are limitless, and buddy.works makes it easy for me to explore them.

But whatever deployment tool you use… happy deploying!