My Docker Setup
When it comes to Docker, I use Docker Compose to set up and link all of my containers together. It's rare that I have only a single container. Many of my Sculpin-based sites do live quite comfortably inside an nginx container, but even those take advantage of volumes. For a basic three-tiered application, I start off with this basic docker-compose.dev.yml file:
# docker-compose.dev.yml
version: '2'

volumes:
  mysqldata:
    driver: local

services:
  nginx:
    image: nginx
    volumes:
      - ./:/var/www:ro
      - ./app/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - phpserver
  phpserver:
    build:
      context: ./
      dockerfile: ./phpserver.dockerfile
    working_dir: /var/www/public
    volumes:
      - ./:/var/www/
    links:
      - mysqlserver
  mysqlserver:
    image: mysql
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_db
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - mysqldata:/var/lib/mysql
  composer:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./composer.dockerfile
    volumes:
      - ./:/app
I tend to use the stock nginx image supplied on the Docker Hub, as well as the official MySQL image. Both of these tend to work out of the box without much extra configuration other than mounting some config files, like I do above for nginx.
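The app/nginx/default.conf mounted above is just a standard nginx-to-PHP-FPM pass-through. A minimal sketch, assuming the app's front controller lives in /var/www/public, might look like:

server {
    listen 80;
    root /var/www/public;
    index index.php;

    location / {
        # Route pretty URLs through the front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "phpserver" resolves thanks to the Compose link; php:fpm listens on 9000
        fastcgi_pass phpserver:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}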
Most of my PHP projects tend to need extensions, so I use the following Dockerfile for PHP:
# phpserver.dockerfile
FROM php:fpm
# Install the extensions the project needs
RUN docker-php-ext-install pdo pdo_mysql
# Bake the code in; only matters for production builds
COPY ./ /var/www
It uses the stock FPM tag supplied by the PHP image, and I generally use the full-sized version of the images. They do make available images built off of Alpine Linux, which are much smaller, but I've had issues trying to build some extensions on them. I also have a COPY command here because this is the same Dockerfile I use for production; in development it is a wasted operation.
The other thing I do is define a service for Composer, the package manager for PHP. The Dockerfile for it mirrors the one for PHP, except it is built using the composer/composer image and it doesn't copy any files into itself as it never goes into production.
# composer.dockerfile
FROM composer/composer
# Mirror the PHP image's extensions so Composer can resolve platform requirements
RUN docker-php-ext-install pdo pdo_mysql
As is pretty standard, nginx links to PHP, and PHP links to MySQL. With a docker-compose -f docker-compose.dev.yml up -d I can have my environment build itself and be all ready to go.
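On a fresh clone, the bring-up and a quick sanity check are just:

docker-compose -f docker-compose.dev.yml up -d
docker-compose -f docker-compose.dev.yml ps

The composer service will show up as exited, which is expected given its /bin/true entrypoint; more on that below.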
Why the Composer Service?
I'm a big fan of containerizing commands, as it reduces the amount of stuff I have installed on my host machine. As Composer is a part of my workflow, which I'll go over more in a minute, I build a custom image specific to this project with all the needed extensions. Without doing this, I would have to run Composer either from my host machine directly, which can cause issues with missing extensions, PHP version mismatches, and the like, or with the --ignore-platform-reqs flag, which can introduce dependency problems with extensions. Building my own image makes it simple to script a custom, working Composer container per project.
The entrypoint: /bin/true line is there just to make the container that Docker Compose creates exit right away, as there is currently no way to have Compose build an image but not attempt to run it. The other thing you can do is download the PHAR package of Composer and run it using the PHP image generated by the project.
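That alternative might look something like this, assuming composer.phar has been downloaded into the project root and the project's phpserver image has already been built (myproject_phpserver stands in for whatever image name Compose generates):

docker run --rm -ti -v "$(pwd)":/var/www -w /var/www myproject_phpserver php composer.phar install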
Custom Functions
I hate typing, so I have a few shell functions that make working with my toolchain a bit easier. I use both a Mac and ArchLinux, so I standardized on using the zsh shell. This makes it easier to move my shell scripts from one machine to another. Since I tend to run the PHP and Composer commands regularly, I have two functions I define in zsh that look to see if there is an image available for the project I'm in, otherwise they default to stock images:
# ~/.zshrc
function docker-php() {
    appname=$(basename "$(pwd -P)")
    # Compose strips hyphens when generating image names, so strip them all here too
    appname="${appname//-/}"
    imagename='php:cli'
    if docker images | grep -q "${appname}_phpserver"; then
        imagename="${appname}_phpserver"
    fi
    docker run -ti --rm -v "$(pwd)":/app -w /app "$imagename" php "$@"
}

function docker-composer() {
    appname=$(basename "$(pwd -P)")
    appname="${appname//-/}"
    imagename='composer/composer'
    if docker images | grep -q "${appname}_composer"; then
        imagename="${appname}_composer"
    fi
    docker run --rm -v ~/.composer:/root/.composer -v "$(pwd)":/app -v ~/.ssh:/root/.ssh "$imagename" "$@"
}
I can now run docker-php to invoke a PHP CLI command that uses a project's phpserver image, and docker-composer to do the same with Composer. I could clean these up, and probably will in the future, but for now they get the job done.
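Usage mirrors the underlying tools; for example (the package name is just an illustration):

docker-php -v                              # php -v, in the project image or php:cli
docker-composer require monolog/monolog   # composer require, in the project's composer image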
A General Workflow
By using Docker Compose and the custom functions, I'm pretty well set. I copy all of these files into a new directory, run my docker-composer command to start requiring libraries, and I'm ready to work. If I need to use a skeleton project, I just create it in a sub-folder of my project and move everything up one level.
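As a sketch, with a hypothetical vendor/skeleton package, that looks like:

docker-composer create-project vendor/skeleton tmp   # vendor/skeleton is a placeholder
mv tmp/* . && rm -rf tmp                             # shift everything up a level (mind any dotfiles)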
For applications built against one specific version of PHP, I end here and run my unit tests using the docker-php function defined above. If I need multiple versions of PHP to test against, I'll make stub services like I did with the composer service.
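Those stubs are just more entries in the Compose file, each pointing at a Dockerfile for the PHP version in question (the service and file names here are illustrative):

# docker-compose.dev.yml (excerpt)
  php56:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./php56.dockerfile
    volumes:
      - ./:/var/www/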
Any custom commands above and beyond this get bash scripts in the project.
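For instance, a hypothetical bin/test script wrapping PHPUnit might look like:

#!/usr/bin/env bash
# bin/test — run the suite inside the project's PHP image
# (myproject_phpserver is a placeholder for the Compose-built image name)
docker run --rm -ti -v "$(pwd)":/var/www -w /var/www myproject_phpserver \
    php vendor/bin/phpunit "$@"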
Deployment
Deployment is always done on a project-by-project basis. I tend to package up the application in one image for the most part, and then rebuild the application using the new images. How I do that depends on the actual build process being used, but it is a combination of using the above Dockerfiles for PHP and/or Docker Compose, stacking config files with -f.
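As an illustration, a production build might stack a base file with a production override (the file names are just a convention I'm assuming here):

docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d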
I skirt the whole dependency issue with Composer by normally running it with --ignore-platform-reqs on the build server, mostly so I don't clog the build server with more images than I need, and don't have to install any more extensions there than necessary.
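On the build server that boils down to a plain Composer call, something like (--no-dev is my own assumption for a production build):

composer install --no-dev --ignore-platform-reqs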
Either way, the entire application is packaged in a single image for deployment.
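In practice, that packaging step is just a build and a push (the registry and tag are placeholders):

docker build -f phpserver.dockerfile -t registry.example.com/myapp:1.0.0 .
docker push registry.example.com/myapp:1.0.0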