My Docker Setup

When it comes to Docker, I use Docker Compose to set up and link all of my containers together. It's rare that I have a single container; many of my Sculpin-based sites live quite comfortably inside of a single nginx container, but even those take advantage of volumes. For a basic three-tiered application, I start off with this basic docker-compose.yml file:

version: '2'

services:
  nginx:
    image: nginx
    volumes:
      - ./:/var/www:ro
      - ./app/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - phpserver

  phpserver:
    build:
      context: ./
      dockerfile: ./phpserver.dockerfile
    working_dir: /var/www/public
    volumes:
      - ./:/var/www/
    links:
      - mysqlserver

  mysqlserver:
    image: mysql
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_db
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - mysqldata:/var/lib/mysql

  composer:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./composer.dockerfile
    volumes:
      - ./:/app

volumes:
  mysqldata:
    driver: local

I tend to use the stock nginx image supplied on the Docker Hub, as well as the official MySQL image. Both of these tend to work out of the box without much extra configuration other than mounting some config files, like I do above for nginx.

Most of my PHP projects tend to need extensions, so I use the following Dockerfile for PHP:

FROM php:fpm

RUN docker-php-ext-install pdo pdo_mysql

COPY ./ /var/www

It uses the stock FPM tag supplied by the PHP image, and I generally use the full-sized version of the images. They do make available images built off of Alpine Linux, which are much smaller, but I've had issues trying to build some extensions against them. I also have a COPY command here because this is the same Dockerfile I use for production; in development this is a wasted operation.

The other thing I do is define a service for Composer, the package manager for PHP. The Dockerfile for it mirrors the one for PHP, except it is built from the composer/composer image and it doesn't copy any files into itself, as it never goes into production.

FROM composer/composer

RUN docker-php-ext-install pdo pdo_mysql

As is pretty standard, nginx links to PHP, and PHP links to MySQL.

With a docker-compose up -d I can have my environment build itself and be all ready to go.

Why the Composer Service?

I'm a big fan of containerizing commands, as it reduces the amount of stuff I have installed on my host machine. As Composer is a part of my workflow, which I'll go over more in a minute, I build a custom image specific to this project with all the needed extensions. Without doing this, I would have to run Composer either from my host machine directly, which can cause issues with missing extensions, PHP version mismatches, etc., or run it with the --ignore-platform-reqs flag, which can introduce dependency problems with extensions.

Building my own image makes it simple to script a custom, working Composer container per project.

The entrypoint: /bin/true line is there just to make the container that Docker Compose creates exit right away, as there is not currently a way to have Compose build an image but not attempt to run it.

The other thing you can do is download the PHAR package of Composer and run it using the PHP image generated by the project.
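As a sketch of that alternative, assuming the compose file above and that composer.phar sits in the project root (mounted at /var/www):

```shell
# One-time setup: grab the Composer PHAR locally
curl -sS https://getcomposer.org/installer | php

# Run it through the project's own PHP service instead of a dedicated image
docker-compose run --rm phpserver php /var/www/composer.phar install
```

This avoids the extra composer image entirely, at the cost of keeping a composer.phar around in each project.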

Custom Functions

I hate typing, so I have a few shell functions that make working with my toolchain a bit easier. I use both a Mac and Arch Linux, so I standardized on the zsh shell. This makes it easier to move my shell scripts from one machine to another. Since I tend to run the PHP and Composer commands regularly, I have two functions I define in zsh that look to see if there is an image available for the project I'm in, and otherwise default to stock images:

# ~/.zshrc
function docker-php() {
    appname=$(basename `pwd -P`)
    output=$(docker images | grep "${appname}_phpserver")
    if [ "$?" = "0" ]; then
        imagename="${appname}_phpserver"
    else
        # Fall back to the stock image
        imagename="php:fpm"
    fi
    docker run -ti --rm -v $(pwd):/app -w /app $imagename php $*
}

function docker-composer() {
    appname=$(basename `pwd -P`)
    output=$(docker images | grep "${appname}_composer")
    if [ "$?" = "0" ]; then
        imagename="${appname}_composer"
    else
        # Fall back to the stock image
        imagename="composer/composer"
    fi
    docker run --rm -v ~/.composer:/root/.composer -v $(pwd):/app -v ~/.ssh:/root/.ssh $imagename $*
}

I can now run docker-php to invoke a PHP CLI command that uses a project's phpserver image, and docker-composer to do the same with Composer. I could clean these up, and probably will in the future, but for now they get the job done.

A General Workflow

By using Docker Compose and the custom functions, I'm pretty well set. I copy all of these files into a new directory, run my docker-composer command to start requiring libraries, and I'm ready to go. If I need to use a skeleton project, I just create it in a sub-folder of my project and move everything up one level.

For applications that are being built against one specific version of PHP, I end here, and I run my unit tests using the docker-php function that I have defined. If I need to have multiple versions of PHP to test against, I'll make stub services like I did with the composer service.
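Those stubs can follow the same pattern as the composer service; a sketch, where the php56 service name and php56.dockerfile are hypothetical:

```yaml
  # Hypothetical stub service for testing against another PHP version
  php56:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./php56.dockerfile
    volumes:
      - ./:/var/www/
```

Since the entrypoint is /bin/true, the service exists only so Compose builds and tags the image; the tests themselves run via docker run against the resulting image, the same way the docker-php function does.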

Any custom commands above and beyond this get bash scripts in the project.


Deployment is always done on a project-by-project basis. I tend to package up the application in one image for the most part, and then rebuild the application using the new images. How I do that depends on the actual build process being used, but it is a combination of using the above Dockerfiles for PHP and/or Docker Compose and stacking config files with -f.
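As a sketch of that config stacking (the file name, image name, and registry here are hypothetical):

```yaml
# docker-compose.prod.yml: swap the development build for a pre-built image
version: '2'
services:
  phpserver:
    image: registry.example.com/myapp/phpserver
```

Stacked with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d, values from the later file win for any key both files define.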

I skirt the whole dependency issue with Composer by normally running it with --ignore-platform-reqs on the build server, mostly so I don't clog the build server with more images than I need and don't have to install any more extensions on it than necessary.

Either way, the entire application is packaged in a single image for deployment.

Back on December 10th, I launched my first book, Docker for Developers, on Leanpub. One of the things that I kind of glossed over, mostly because it wasn't the focus of the book, was at the beginning of the "Containerizing Your Application" chapter. It was this:

Modern PHP applications do not generally tote around their vendor/ directory and instead rely on Composer to manage our dependencies. Let’s pull down the dependencies for the project.

$ docker run --rm -u $UID -v `pwd`:/app composer/composer install

This first run will download the image, as you probably do not have the composer/composer image installed. This container will mount our project code, parse the composer.lock file, and install our dependencies just as if we ran Composer locally. The only difference is that we wrapped the command in a Docker container which we know has PHP and all the correct libraries pre-installed to run Composer.

There's something very powerful in there that I'm not sure many people take away from the book. I spend most of my time showing how Docker is used and how to get your application into it, and the book answers the question which many people have at the outset - how do I get my application into Docker?

One thing many people overlook is that Docker is not just a container for servers or other long-running apps; it is a container for any command you want to run. When you get down to it, that is all Docker is doing: running a single command (well, if done the Docker way). Most people just focus on long-running executables like servers.

Any sort of binary can generally be pushed into a container and since Docker can mount your host file system you can start to containerize any binary executable. In the Composer command above I've gotten away from having a dedicated Composer command, or even phar, on my development machines and just use the Dockerized version.


Less maintenance and thinking.

Docker has become a standard part of my everyday workflow now, even if the project I'm working on isn't running inside of a Docker container. I no longer have to install anything more than Docker to get the development tools I need. Let's take Composer as an example.

Putting Composer in a Container

Taking a look at Composer, it is just a phar file that can be downloaded from the internet. It requires PHP with a few extensions installed.

Let's make a basic Dockerfile and see how that works:

FROM php:7

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ENTRYPOINT ["composer"]
CMD ["--version"]

We can then build it with the following:

docker build -t composer .

I should then be able to run the following and get the Composer version:

docker run -ti --rm composer

Great! There's a problem though. Go ahead and try to install a few things, and eventually you'll get an error stating that the zip extension isn't installed. We need to install and enable it through the docker-php-ext-* commands available in the base image. It has some dependencies, so we will install those through apt as well.

FROM php:7

RUN apt-get update && \
  DEBIAN_FRONTEND=noninteractive apt-get install -y \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libmcrypt-dev \
    libpng12-dev \
    libbz2-dev \
    php-pear \
    curl \
    git \
    subversion \
  && rm -r /var/lib/apt/lists/*

RUN docker-php-ext-install zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ENTRYPOINT ["composer"]
CMD ["--version"]

Now rebuild the image and try again. It should work this time. You won't have a vendor directory, but the command won't fail anymore. We still need to mount our project directory inside of the container, which brings us back to the original command:

docker run --rm -u $UID -v `pwd`:/app composer/composer install

That is a lot of stuff to type out, especially compared to just composer. Through the beauty of most CLI-based operating systems, though, you can create aliases. Aliases allow you to type short commands that are expanded out into much longer ones. In my ~/.zshrc file (though you might have a ~/.bashrc or ~/.profile or something similar) we can create a new alias:

alias composer='docker run --rm -u $UID -v $PWD:/app composer'

Now I can simply type composer anywhere on the command line and my Composer image will spin up.

A better version can be found in the Dockerfile for the PHP base image of composer/composer on GitHub, which I based the above on. In fact, I don't build my own Composer image; I use the existing one from the Docker Hub since then I don't have to maintain it.

It isn't just PHP stuff

Earlier today I sent out a tweet after getting frustrated with running Grunt inside of VirtualBox.

It is a pain because some of Grunt's functionality relies on the filesystem notifying it that a file has changed, and when Grunt runs inside of a virtual machine and is watching a mounted folder (be it NFS or anything else other than rsync) it can take up to 30 seconds for the notify signal to bubble up. That makes for some slow development.

I hate polluting my work machine with development tools. I had a few people say they would love having Grunt and Bower inside of a Docker container, so I did just that.

I created a new container called dragonmantank/nodejs-grunt-bower and pushed it up as a public repository on the Docker Hub.

Since these images are pre-built I don't have to worry about any dependencies they might need, and setting up a new machine for these tools now is down to installing Docker (which is going to happen for me anyway) and setting up the following aliases:

alias composer='docker run --rm -u $UID -v $PWD:/app composer/composer'
alias node='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower node'
alias grunt='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower grunt'
alias npm='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower npm'
alias bower='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower bower'

The first time I run one of the commands the image is automatically downloaded so I don't even have to do anything other than just run the command I want.

Start Thinking about Dockerizing Commands

Don't think that Docker is only about running servers or daemons. Any binary can generally be put inside of a container, and you might as well make your life easier by making your tools easier to install and maintain.

OK, so the title comes off as a bit harsh. Today it was announced that RogueWave Software acquired Zend, meaning RogueWave can now offer and support the full LAMP stack that many enterprise customers are already running. Zend is a staple in the PHP community, with its founders Andi Gutmans and Zeev Suraski working on the Zend Engine (the thing that turns all of our PHP code into something useful), and Zend's suite of software including Zend Framework, Zend Server, and Zend Studio.

Looking at RogueWave, Zend's software will be a good complement to what RogueWave already offers. So congrats to both parties.

That's not stopping Twitter, though, and my timeline thus far is filled with everything from congratulations to doomsday predictions about the future of PHP. Let's break down what I think will happen.

Zend as they are will disappear

From what I've seen of traditional enterprise mergers, we've got a good two to three years before we start seeing anything major. There is traditionally a grace period where the newly acquired company is allowed to function like it had for a few years while everyone figures out what is going on, especially when the acquisition is amicable. Zend will continue to look like Zend for a while.

After that grace period wears off, we'll start to see changes, I'm sure. Zend software will start to be licensed however RogueWave's software is licensed, release schedules will start to match up, synergies between projects will be more heavily looked at, and so on. Whether any of that is good or bad I don't know, as I know little of RogueWave personally, but Zend isn't going to stop acting like Zend overnight.

I hope that's the case, anyway. I'm looking forward to a few more ZendCons in Vegas.

The Zend Engine will change licenses

PHP, and the Zend Engine, currently follow the PHP License. There's a line at the top though that has people worried:

Copyright (c) 1999-2006 Zend Technologies Ltd. All rights reserved.

Zend holds the copyright to the Zend Engine, and thus the ability to set the license on the Zend Engine. What's the Zend Engine? It's the thing that makes PHP... well, PHP. It turns our written code into something servers understand, and makes things work. The only major player that compares to it is HHVM (yes, there are others, but HHVM is the only one I've seen with real traction).

So, as copyright holder, Zend/RogueWave is well within their rights to change the license to something more permissive, or lock it down. It is their choice.

If they do decide to do that, they can't change it retroactively. The PHP Community as a whole can continue to use previous versions of the Zend Engine, as long as they continue to follow the PHP License, and ignore the "new" Zend Engine. Life would find a way.

There's precedent for that, in fact: when Zend suddenly showed up with phpng, there was some talk about not using it. We're a fickle group, and PHP internals could, and would, move away from the Zend Engine if needed. We'd also gladly continue to use older versions of the Zend Engine from before a license change.

Worst case, we all switch to HHVM and have a few minor bugs to figure out.

Licensing Changes [EDIT - 2015-10-06 2:05pm]

A few people have brought to my attention my misstating that Zend can change the license. It's more complicated than I let on above, but as the copyright holder to the Zend Engine, Zend could change the license. This would take a bit of work, though, because each contributor to PHP keeps the copyright over the code that they themselves have written. Because of how code is contributed to PHP, we don't really have one specific copyright holder; anyone that has contributed has a bit of say over a license change.

To top that off, everyone would have to agree to the license change. Joomla went through a similar process when they tried to change the license on the Joomla Framework code to LGPL. This meant determining who contributed under the old licenses and getting them to sign off on the new license. It was a tremendous undertaking, but they did it.

So, Zend has the copyright on the Zend Engine and can attempt to change the license, if everyone agrees. I don't foresee that happening. I'd bet we'd replace the engine long before that happens.

For a bit of a doomsday scenario, I wouldn't rule out the possibility of a new engine from RogueWave that is compatible with the Zend Engine (or whatever we use in the future), much like HHVM is. HHVM has already proven that there is a market for an enhanced PHP that is compatible with Zend Engine PHP but has some nice things added. That would allow RogueWave to offer an "enterprise PHP" to their customers that they control, much like we see Oracle roll their own version of Red Hat.

If this happens, I hope RogueWave calls it the "Rogue Engine." RogueWave, feel free to contact me for payment on the usage of that name.

Zend Framework will die

No it won't. Zend Framework, while a nice entry point for developers into Zend's over-arching product line, is an open source project. Anyone can fork it and work on it. Zend Framework is also a major player in the PHP framework space, with a vibrant community and a huge userbase. Granted, the main contributors to Zend Framework are Zend employees, but the license is permissive and I'm sure that people will still work on it in the event that RogueWave no longer wants to support it.

RogueWave seems to invest heavily in open source software, though, and Zend Framework will work well with their customers. I doubt the future of Zend Framework is anything to worry about.

So, Congrats to RogueWave and Zend

I, for one, want to congratulate Zend and RogueWave on their partnership and merger. They seem to complement each other, and it just means that PHP will get better support in enterprises.

In a few years I might eat my words, but I'm sure right now my friends at Zend will enjoy themselves going forward.