Recently, with the 2016 MacBook refresh, many developers have taken a good hard look at whether they want to stick with macOS and Apple's hardware. Traditionally, MacBook Pros have been excellent kit, and I used one for travel up until earlier this year. They had powerful CPUs, could be loaded with a good amount of RAM, and had unparalleled battery life. The fact that macOS was built on a Unix foundation also helped, making it easier to build and work with developer tools thanks to the powerful command line interface.

The new hardware refresh was less than stellar. All jokes about the Touch Bar aside, it was not the update many of us were looking for. While it does mean that 2015 models might get cheaper, if you are in the market for a new laptop, is it time to consider switching to another OS?

Linux would be the closest analogue in terms of how things work, but not all hardware plays well with it. You will also lose a lot of day-to-day software, though the alternatives might work for you. If you are looking at a new OS, I'd take a hard look at Linux on good portable hardware like a ThinkPad X or T series laptop. Even a used ThinkPad (except the 2014 models) will serve you well for a long time if you go the Linux route.

Up until about August, I ran Linux day-to-day. My job change meant that I had to run software that just did not work well under Linux, so I switched back to Windows. Back in 2015 I wrote about my Windows setup, and now I think it's time for an update now that I'm on Windows full time. A lot of things have changed in the past year, and while working on Windows before was pretty good, it's even better now.

Windows 10 Pro

Yes, it might be watching everything you do, but I've upgraded all of my computers to Windows 10 Pro. Part of this was necessity, as Docker for Windows only works on Windows 10 Pro or higher, but Windows 10 itself also opens up the ability to run bash. If you are coming from Windows 7, there isn't much difference. Everything is just about the same, with a little bit of the Windows 8.1 window dressing still coming through sometimes.

Windows 10 Pro also affords me the ability to remote desktop into my machine. Yes, yes, I know that I can do that for free with macOS and Linux, but that's not the only reason to use Pro. Remote Desktop allows me to access my full desktop from my laptop or my phone. There's been a bunch of times where I'm away, get an urgent e-mail, and need to check something on our corporate VPN. I just remote desktop into my machine at home and I'm all set. This is much easier than setting up OpenVPN on my iPhone.

The main reason I run Windows 10 is Docker, which I'll outline below. The short of it is that Docker for Windows requires Hyper-V, and Hyper-V is only available on Windows 10 Pro or higher.

If you are running Windows on a PC, you should have upgraded already. Nearly all the software that I had problems with works fine on Windows 10 now. Any issues I have are purely down to how Windows handles things differently from what I got used to on Linux.

The Command Line - bash and Powershell wrapped in ConEmu

Part of this hasn't changed. I still use Powershell quite a bit. I even give my Docker workshop at conferences from a Powershell instance. With the Windows 10 Anniversary Update, Powershell now works more like a traditional terminal so you can resize it! That sounds like a little thing, but being stuck to a certain column width in older versions was a pain. Copy and paste has also been much improved.

I still install git and posh-git to get the terminal experience I had using zsh and oh-my-zsh on Linux. Since Powershell ships with aliases for many of the common GNU-style commands, moving around is pretty easy and the switch to using Powershell shouldn't take long. Some things like grep don't work, so you will have to find alternatives ... or you could just use real grep from bash.

I also do all of my Docker stuff from within Powershell. The reasons for this are twofold - one is that it works fine in Powershell out of the box, and the second is that setting up Docker to work in bash is a bit of a pain.

PHP and Composer, which are daily uses for me, are also installed with their Windows variants. I do also run specific versions under Docker, but having them natively inside Powershell just saves some time. PHP just gets extracted to a directory (C:\php for me), and you just point the Composer installer to that. After that, PHP is all set to go.

The Windows Subsystem for Linux (or bash) is a must for me. It runs an Ubuntu 14.04 userland in a CLI environment directly inside of Windows. It is close to a full Linux environment, with a few minor limitations, for running command line and development tools. I'm pretty familiar with Ubuntu already, so I just install things as I would on any Ubuntu box; I have copies of PHP, git, etc, all installed.
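As a rough sketch, the first-run setup inside bash looks something like this (the package list is just an example of what I'd grab on a stock Ubuntu 14.04 install):

# update the package index, then pull in the usual command line tools
sudo apt-get update
sudo apt-get install -y git php5-cli php5-curl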

What I don't do is set up an entire development environment inside Ubuntu/bash; I leave that for Docker. Getting services like Apache to run can be a bit of a pain because of the networking that happens between bash and the host Windows system. You can do it, I just chose not to.

I'll switch back and forth between bash and Powershell as needed.

I've also switched to using ConEmu, which is a wrapper for various Windows-based terminals. It provides an extra layer that allows things like tabs, better configuration, etc. I have it defaulted to a bash shell, but have added a keyboard shortcut to launch Powershell terminals as well. This keeps desktop clutter down while giving me some of the power that Linux/macOS-based terminals had.

Editing files in bash

One thing I don't do in bash is store my files inside of the home directory. When you install it, it sets up a directory at C:\Users\Username\AppData\Local\Lxss\rootfs that contains the installation, and C:\Users\Username\AppData\Local\Lxss\home\username that contains your home directory. I've had issues with files edited directly through those Windows paths not showing up in the bash instance. For example, I used to open bash, git clone a project into ~/Projects, and then open up PhpStorm and edit the files through those AppData paths. I'd perform the edits inside PhpStorm, save the file, and sometimes the edits showed up in bash, sometimes they didn't.

Instead, I always move to /mnt/c/Users/Username/ and do everything in there. bash automatically mounts your drives under /mnt, so you can get to the "Windows" file system pretty easily. I haven't had any issues since doing that.
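As a quick example (the Projects directory and repository are placeholders):

# work on the Windows side of the filesystem, which bash exposes under /mnt
cd /mnt/c/Users/Username/Projects
git clone https://github.com/example/my-app.git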

Docker for Windows

Microsoft has done a lot of work to help Docker run on Windows. While it is not quite as seamless as the native Linux version, the Hyper-V version is leaps and bounds better than the old Docker Toolbox setup. Hyper-V's I/O and networking layers are much faster, and other than a few little quibbles with Powershell it is just as nice to work in as on Linux. In fact, I've run my Docker workshop from Windows 10 the last few times with as much success as on Linux.

It does require Hyper-V to be installed, so it still has some of the same issues as Docker Toolbox when it comes to things like port forwarding. You can also run Windows containers, though nothing I do day-to-day requires them, so my work is all inside Linux containers.

I would suggest altering the default settings for Docker, though. You will need to enable "Shared Drives," as host mounting is disabled by default. I would also suggest going under "Network" and setting a fixed DNS server, which helps when the Docker VM decides to just stop resolving DNS. If you can spare it, go under "Advanced" and bump up the RAM as well. I have 20 gigabytes of RAM in my desktop so I bump it up to 6 gigs, but my laptop works fine with the default 2 gigabytes.

All of my Docker work is done through Powershell, as the Docker client sets up Powershell by default. You could get this working under Bash as well by installing the Linux Docker Client (not the engine), and pointing it to the Hyper-V instance, but I find that's much more of a pain than just opening a Powershell window.
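For the curious, the bash side of that mostly boils down to pointing the Linux client at the daemon over TCP. This assumes you have ticked the Docker for Windows option that exposes the daemon on tcp://localhost:2375 without TLS:

# inside bash: tell the Linux docker client where the Hyper-V-hosted daemon lives
export DOCKER_HOST=tcp://localhost:2375
docker version   # should list both client and server if the connection works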

I run all of my services through Docker, so Apache, MySQL, etc, are all inside containers. I don't run any servers from the Windows Subsystem for Linux.

PhpStorm and Sublime Text

Nothing here has changed since 2015. PhpStorm and Sublime Text 3 are my go-to editors. PhpStorm is still the best IDE I think I've ever used, and Sublime Text is an awesome text editor with very good large file support.

What I'm Not Using Anymore

A few things have changed. I've switched to using IRCCloud instead of running my own IRC bouncer. It provides logging and excellent mobile apps for iOS and Android. It is browser-based and can eat memory if the tab is left open for days, but it saves me from running a $5 server on DigitalOcean that I have to maintain.

PuTTY, while awesome, has been completely replaced for me by Powershell and bash. Likewise, Cygwin is dead to me now that I have proper Linux tools inside bash.

I've also pretty much dropped Vagrant. At my day job we have to run software that isn't compatible with VirtualBox, and Docker on Windows works just fine now. I don't even have Vagrant installed on any of my machines anymore.

It's a Breeze

Developing PHP on Windows is nearly as nice as developing on Linux or macOS. I'd go so far as to say that I don't have a good use for my MacBook Pro anymore, other than some audio work where I need a portable machine. I'm as comfortable working in Windows as I was when I was running Ubuntu or ArchLinux, even though I'd much prefer running a free/libre operating system. I've got to make money though, so I'll stick with Windows for now.

tl;dr

Here's what I use:

- Windows 10 Pro
- bash and Powershell wrapped in ConEmu
- Docker for Windows
- PhpStorm and Sublime Text 3

My Docker Setup

When it comes to Docker, I use Docker Compose to set up and link all of my containers together. It's rare that I have only a single container; many of my Sculpin-based sites live quite comfortably inside a single nginx container, but even those take advantage of volumes. For a basic three-tiered application, I start off with this basic docker-compose.dev.yml file:

# docker-compose.dev.yml
version: '2'

volumes:
  mysqldata:
    driver: local

services:
  nginx:
    image: nginx
    volumes:
      - ./:/var/www:ro
      - ./app/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - phpserver

  phpserver:
    build:
      context: ./
      dockerfile: ./phpserver.dockerfile
    working_dir: /var/www/public
    volumes:
      - ./:/var/www/
    links:
      - mysqlserver

  mysqlserver:
    image: mysql
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_db
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - mysqldata:/var/lib/mysql

  composer:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./composer.dockerfile
    volumes:
      - ./:/app

I tend to use the stock nginx image supplied on the Docker Hub, as well as the official MySQL image. Both of these tend to work out of the box without much extra configuration other than mounting some config files, like I do above for nginx.

Most of my PHP projects tend to need extensions, so I use the following Dockerfile for PHP:

FROM php:fpm

RUN docker-php-ext-install pdo pdo_mysql

COPY ./ /var/www

It uses the stock FPM tag supplied by the PHP image, and I generally use the full-sized version of the images. There are also images built off of Alpine Linux which are much smaller, but I've had issues trying to build some extensions on them. I also have a COPY command here because this is the same Dockerfile I use for production; in development it is a wasted operation.

The other thing I do is define a service for Composer, the package manager for PHP. The Dockerfile for it mirrors the one for PHP, except it is built using the composer/composer image and it doesn't copy any files into itself as it never goes into production.

FROM composer/composer

RUN docker-php-ext-install pdo pdo_mysql

As is pretty standard, nginx links to PHP, and PHP links to MySQL.

With a docker-compose -f docker-compose.dev.yml up -d I can have my environment build itself and be all ready to go.
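Spelled out, bringing the stack up and checking on it looks like this:

docker-compose -f docker-compose.dev.yml up -d
docker-compose -f docker-compose.dev.yml ps    # nginx, phpserver, and mysqlserver should be Up; composer exits right away
docker-compose -f docker-compose.dev.yml logs -f phpserver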

Why the Composer Service?

I'm a big fan of containerizing commands, as it reduces the amount of stuff I have installed on my host machine. As Composer is a part of my workflow, which I'll go over more in a minute, I build a custom image specific to this project with all the needed extensions. Without doing this, I would have to run Composer either from my host machine directly, which can cause issues with missing extensions, PHP version mismatches, and so on, or run it with the --ignore-platform-reqs flag, which can introduce dependency problems with extensions.

Building my own image makes it simple to script a custom, working Composer container per project.

The entrypoint: /bin/true line is there just to make the container that Docker Compose creates exit right away, as there is not currently a way to have Compose build an image but not attempt to run it.

The other thing you can do is download the PHAR package of composer, and run it using the PHP image generated by the project.
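That alternative looks roughly like this, assuming Compose named the project's PHP image myproject_phpserver:

# grab composer.phar once, then run it with the project's own PHP image
curl -sSL https://getcomposer.org/composer.phar -o composer.phar
docker run --rm -v $(pwd):/app -w /app myproject_phpserver php composer.phar install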

Custom Functions

I hate typing, so I have a few shell functions that make working with my toolchain a bit easier. I use both a Mac and ArchLinux, so I standardized on using the zsh shell. This makes it easier to move my shell scripts from one machine to another. Since I tend to run the PHP and Composer commands regularly, I have two functions I define in zsh that look to see if there is an image available for the project I'm in, otherwise they default to stock images:

# ~/.zshrc
# Run PHP CLI commands, preferring the current project's phpserver image if one exists
function docker-php() {
    appname=$(basename `pwd -P`)
    appname="${appname//-/}"    # strip dashes to match the Compose project name
    imagename='php:cli'
    output=$(docker images | grep "${appname}_phpserver")
    if [ "$?" = "0" ]; then
        imagename="${appname}_phpserver"
    fi
    docker run -ti --rm -v $(pwd):/app -w /app $imagename php $*
}

# Run Composer, preferring the current project's composer image if one exists
function docker-composer() {
    appname=$(basename `pwd -P`)
    appname="${appname//-/}"    # strip dashes to match the Compose project name
    imagename='composer/composer'
    output=$(docker images | grep "${appname}_composer")
    if [ "$?" = "0" ]; then
        imagename="${appname}_composer"
    fi
    docker run --rm -v ~/.composer:/root/.composer -v $(pwd):/app -v ~/.ssh:/root/.ssh $imagename $*
}

I can now run docker-php to invoke a PHP CLI command that uses a project's phpserver image, and docker-composer to do the same with Composer. I could clean these up, and probably will in the future, but for now they get the job done.
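For example, inside a hypothetical project checkout:

cd ~/Projects/my-app       # placeholder project directory
docker-php -v              # runs `php -v` using the project's phpserver image, or stock php:cli
docker-composer require monolog/monolog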

A General Workflow

By using Docker Compose and the custom functions, I'm pretty well set. I copy all of these files into a new directory, run my docker-composer command to start requiring libraries, and I'm all set. If I need to use a skeleton project I will just create it in a sub-folder of my project and move everything up one level.

For applications that are being built against one specific version of PHP, I end here, and I run my unit tests using the docker-php function that I have defined. If I need to have multiple versions of PHP to test against, I'll make stub services like I did with the composer service.
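If I only need a quick pass against other versions and don't want to touch the Compose file, the stock CLI images from the Docker Hub work as well. A rough sketch, assuming PHPUnit is installed through Composer:

# run the same test suite against two different PHP versions
docker run --rm -v $(pwd):/app -w /app php:5.6-cli vendor/bin/phpunit
docker run --rm -v $(pwd):/app -w /app php:7.0-cli vendor/bin/phpunit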

Any custom commands above and beyond this get bash scripts in the project.

Deployment

Deployment is always done on a project-by-project basis. I tend to package up the application in one image for the most part, and then rebuild the application using the new images. How I do that depends on the actual build process being used, but it is a combination of using the above Dockerfiles for PHP and/or Docker Compose and stacking config files with -f.
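Stacking config files just means passing multiple -f flags. A hypothetical production build might look like this, with both file names standing in for whatever the project actually uses:

docker-compose -f docker-compose.yml -f docker-compose.prod.yml build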

I skirt the whole dependency issue with Composer by normally running it with --ignore-platform-reqs on the build server, mostly so I don't clog the build server with more images than I need, and so that I don't have to install any more extensions than needed on the build server.
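On the build server that boils down to something like this (--no-dev being my assumption about what a production build usually wants):

composer install --no-dev --ignore-platform-reqs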

Either way, the entire application is packaged in a single image for deployment.

Back on December 10th, I launched my first book, Docker for Developers, on Leanpub. One of the things that I kind of glossed over, mostly because it wasn't the focus of the book, was at the beginning of the "Containerizing Your Application" chapter. It was this:

Modern PHP applications do not generally tote around their vendor/ directory and instead rely on Composer to manage their dependencies. Let's pull down the dependencies for the project.

$ docker run --rm -u $UID -v `pwd`:/app composer/composer install

This first run will download the image, as you probably do not have the composer/composer image installed yet. The container mounts our project code, parses the composer.lock, and installs our dependencies just as if we ran Composer locally. The only difference is that we wrapped the command in a Docker container which we know has PHP and all the correct libraries pre-installed to run Composer.

There's something very powerful in there that I'm not sure many people take away from the book. I spend most of my time showing how Docker is used and how to get your application into it, and the book answers the question which many people have at the outset - how do I get my application into Docker?

One thing many people overlook is that Docker is not just a container for servers or other long-running apps, but it is a container for any command you want to run. When you get down to it, that is all Docker is doing, just running a single command (well, if done the Docker way). Most people just focus on long running executables like servers.

Any sort of binary can generally be pushed into a container, and since Docker can mount your host file system you can start to containerize any binary executable. With the Composer command above I've gotten away from having a dedicated Composer install, or even the phar file, on my development machines and just use the Dockerized version.

Why?

Less maintenance and thinking.

Docker has become a standard part of my everyday workflow now, even if the project I'm working on isn't running inside of a Docker container. I no longer have to install anything more than Docker to get the development tools I need. Take Composer, for example.

Putting Composer in a Container

Taking a look at Composer, it is just a phar file that can be downloaded from the internet. It requires PHP with a few extensions installed.

Let's make a basic Dockerfile and see how that works:

FROM php:7

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ENTRYPOINT ["composer"]
CMD ["--version"]

We can then build it with the following:

docker build -t composer .

I should then be able to run the following and get the Composer version:

docker run -ti --rm composer

Great! There's a problem though. Go ahead and try to install a few things, and eventually you'll get an error stating that the zip extension isn't installed. We need to install and enable it through the docker-php-ext-* commands available in the base image. It has some dependencies so we will install those through apt as well.

FROM php:7

RUN apt-get update && \
  DEBIAN_FRONTEND=noninteractive apt-get install -y \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libmcrypt-dev \
    libpng12-dev \
    libbz2-dev \
    zlib1g-dev \
    php-pear \
    curl \
    git \
    subversion \
  && rm -r /var/lib/apt/lists/*

RUN docker-php-ext-install zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Work out of /app so a project mounted there is picked up automatically
WORKDIR /app

ENTRYPOINT ["composer"]
CMD ["--version"]

Now rebuild the image and try again. It will probably work. You won't have a vendor directory, but the command won't fail anymore. We need to mount our directory inside of the container, which brings us back to the original command:

docker run --rm -u $UID -v `pwd`:/app composer/composer install

That is a lot of stuff to type out, especially compared to just composer. Thankfully, most CLI-based shells let you create aliases, which allow you to type short commands that expand out into much longer ones. In my ~/.zshrc file (though you might have a ~/.bashrc or ~/.profile or something similar) we can create a new alias:

alias composer='docker run --rm -u $UID -v $PWD:/app composer'   # single quotes so $PWD expands when the alias runs, not when it is defined

Now I can simply type composer anywhere on the command line and my Composer image will spin up.

A better version can be found in the Dockerfile for the PHP base image of composer/composer on GitHub, which I based the above on. In fact, I don't build my own Composer image; I use the existing one at https://hub.docker.com/r/composer/composer/ since then I don't have to maintain it.

It isn't just PHP stuff

Earlier today I sent out a tweet after getting frustrated with running Grunt inside of VirtualBox.

It is a pain because some of Grunt's functionality relies on filesystem notifications when a file changes, and when Grunt runs inside of a virtual machine and is watching a mounted folder (be it NFS or anything other than rsync), it can take up to 30 seconds for the notify signal to bubble up. That makes for some slow development.

I hate polluting my work machine with development tools. I had a few people say they would love having Grunt and Bower inside of a Docker container, so I did just that.

I created a new container called dragonmantank/nodejs-grunt-bower and pushed it up as a public repository on the Docker Hub.

Since these images are pre-built I don't have to worry about any dependencies they might need, and setting up a new machine for these tools now is down to installing Docker (which is going to happen for me anyway) and setting up the following aliases:

alias composer='docker run --rm -u $UID -v $PWD:/app composer/composer'
alias node='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower node'
alias grunt='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower grunt'
alias npm='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower npm'
alias bower='docker run -ti --rm -u $UID -v $PWD:/data dragonmantank/nodejs-grunt-bower bower'

The first time I run one of the commands the image is automatically downloaded so I don't even have to do anything other than just run the command I want.

Start Thinking about Dockerizing Commands

Don't think that Docker is only about running servers or daemons. Any binary can generally be put inside of a container, and you might as well make your life easier by making your tools easier to install and maintain.