Chris Tankersley's Blog

The PHP Community, And My Thanks


When I started working with PHP, it was because I was fed up with Perl. I had been building HTML websites for quite a while at that point, and I was already interested in programming in general. Perl worked, but as anyone who has worked with Perl knows, the syntax is, well... it's Perl. I was working at a local ISP at the time, and one of my coworkers, Matt Wiseman, pointed me to PHP, so I started to look into it. I liked what I saw.

Like many developers I worked in a bubble for many years after that. I stuck around at the ISP for about two more years before moving on to a job that was dedicated to web development full-time (well, mostly). While it was an insurance company, they were one of the few smaller ones that had a dedicated development staff. Insurance has a history of continuing education, so one of the perks was attending training. Northwest Ohio itself is pretty technologically devoid, so I had to look around. I attended a SANSFire conference in 2007 since we used SANS for much of our security training, and LinuxFest Ohio that year as well.

In 2008 I discovered php|tek. I had never attended a conference, and I'll admit that I did php|tek 2008 wrong, just like I did SANSFire and LinuxFest Ohio. I went to the sessions, made small talk with people, but didn't really interact. I didn't know these people. I didn't talk to anyone on IRC, Twitter wasn't really a thing, and at best I matched names up to some blogs that I read.

I had a good time, don't get me wrong. I learned a lot. I attended Zendcon that year as well, and I decided to not spend as much time in my room. I met more people, one of them, not surprisingly, being Michelangelo van Dam. I'd started a programming blog, and I had begun to find more people in the PHP community through Twitter and IRC.

I went back to tek the next year. There was Michelangelo again. Elizabeth Naramore, whom I had never met in real life, recognized my IRC handle from #php despite me doing nothing more than lurking there. There were other people I'd seen at Zendcon. I met more people. I got introduced to the #phpc channel on freenode, and I started following more people on Twitter.

I had unknowingly taken the advice I'd hear five years later from Yitzchok Willroth/coderabbi, which was "Do not separate from the community."

At Zendcon 2009 I was invited to my first non-official party, held by Microsoft. All because I wanted a shirt, and had managed to ask Josh Holmes for one. Michelangelo talked me into submitting to conferences. When tek rolled around in 2010 I don't think I submitted. I didn't feel like I was ready. That was actually a good thing, because my employer decided to cancel my trip to tek so I could travel Ohio and present some stuff we were working on. I started submitting with Zendcon 2010 (I think).

I didn't get accepted. I attended though, and met more people, and tried to be a bigger part of the community. I continued to blog and network with people at conferences because there wasn't anyone locally. I could already see myself becoming a better programmer because I had surrounded myself with people smarter than me, people facing similar, yet different, problems. I could bounce ideas off of people. I was exposed to what other people were doing, talking about, and working with.

I grew as a developer.

In 2011 I didn't get accepted to php|tek, but I did get asked by Elizabeth Tucker Long to write an article for php|architect magazine. I really enjoyed working with her, and I have been writing for the magazine and working with her on other things ever since. It was through the community that I began to write slightly more professionally.

It took me two years to get accepted as a speaker, but Kevin Schroeder liked one of my talks so I got to speak at Zendcon 2012. That helped boost my confidence immensely, and I also caught the speaking bug.

Since then, I've attended many more conferences. I've met many great people. I've met lots of people that I consider my friends. Going to conferences and being part of the PHP community led me to get a contracting position with The Brick Factory in Washington, D.C., because I had met John Bafford at tek one year, followed him on Twitter, and noticed him post a job. I'm assuming he didn't actually look at my GitHub profile before he offered me the job.

A few years later I met another member of the community, Chuck Reeves, and we became friends, and it's through meeting people through the community and conferences that this month I'm moving on from The Brick Factory to work with him. I'm chalking that career move up to the delirium caused by standing in line for 8 hours waiting for hotdogs with Chuck, Jeremy Mikola, Drunk Phil Sturgeon, Sober Phil Sturgeon, Daniel Cousineau, Matt Frost, David Buchmann and Sammy Kaye Powers though. #wurstcon was a trip I would have only made with the PHP Community.

At php[world] 2014 I got to watch as new people were brought into the PHP community, all because they were standing outside when we were figuring out cars for #koshercon (and that whole inclusive, very friendly conference thing put on by php[architect], that probably helped as well).

Growing from a lone developer at a small Ohio insurance company to the developer that I am today, the PHP community has been there, helped me, and provided opportunities I never would have gotten otherwise.

I've mentioned above only a fraction of the people that are a part of the community, and it would be impossible for me to list all of them. I'd thought about it, but it's easier for me to just say look at the people I follow on Twitter, and see the people that hang around in #phpmentoring, #coderabbi, and #phpc on freenode.

If you are a PHP developer, make sure you are a part of the PHP community at large. Attend conferences, attend a local user group. Meet people. Find a group standing around talking, and walk up to them. Say hi. Join an IRC channel.

In closing, I just want to say Thank You to the PHP Community at large. I wouldn't be here without you.

php[world] 2014 Retrospective


Once again I find myself sitting in the airport with way too much time on my hands, which means that now is a good time to write down my thoughts on another great conference.

php[world] was a brand new conference put on by php[architect], which is the same team that puts together php[tek]. Eli, Heather, Sandy, Oscar, and Kevin are well versed in putting together conferences and it once again showed with php[world].

The venue was the same idea as php[tek], with the Sheraton Premiere serving as the location for [world]. It is close enough to get around to different places with minimal effort, but sometimes even that minimal effort is enough to keep people at the hotel. All but two of the rooms sat around a large lobby area that was heavily used as a place for people to interact, and it worked well. Just like with [tek] it was easy to find people to talk to, which meant the hallway track was varied and just as good as at most conferences.

It's no surprise that the php[architect] crew picked a great speaker list. They managed to bring together people from many different PHP ecosystems into a single conference where ideas were shared amongst all of the projects. The ending keynote featured representatives from Drupal, WordPress, Joomla, Magento, Symfony 2, Laravel, and Zend Framework. There was no shortage of good talks to choose from.

I also met a few new faces, and it's always a wonderful feeling when someone new to a conference gets to interact with the larger PHP community. I got to meet up with many friends from previous conferences as well. Ideas were exchanged, thoughts were formulated, and I leave with many different things that I want to do.

The highlight of the conference was a talk by Yitzchok Willroth, better known as @coderabbi. He gave an excellent talk on becoming a better developer by applying some of the things he has learned from his faith to programming. Every single teaching had a message, and every developer in the room could take at least one away.

Overall, this is another conference I'm looking forward to attending next year. php[architect] hit it out of the park, and now I can't wait to see what they have in store for php[tek].

Docker, A Misunderstood Tool


The last few years of development, at least in the PHP world, have been dominated by the idea of quickly and easily duplicating development environments. It is a real problem that a lot of developers, especially PHP developers, have to face when building tools that must work on specific systems. While things are better than the days of not being sure if you were running PHP 5.1, 5.2, or 5.3 (though that is still a big issue for some people), our web applications are becoming more than a few PHP scripts thrown up on a server. Not only do we need to contend with differing developer stacks, but we need to make sure our development environments closely match production.

The go-to tool for many people is Vagrant, which is a wrapper around the VirtualBox virtualization software from Oracle. Vagrant makes it very easy to create and destroy virtual machines and configure them with a text file instead of having to fiddle around with a GUI. Vagrant also couples with Puppet, a configuration management tool, so that you can have a virtual machine start up and automatically provision itself. You can use the same Puppet manifests on your virtual machines as you do in production, or start to use Puppet in production based on your development manifests. Non-techies can easily download a few files from a repository, type vagrant up, and have a full stack ready to go with all the correct software and tools needed. I've moved more than one group to using Vagrant with great success.

The thing is, virtualization is not a new tool. I know I've played around with it since around 2000, when VMWare released their Workstation software. Vagrant made everything simpler for the average developer and removed the barrier of needing to know how to set up a server. There are even sites like Puphpet and Phansible that will build a Vagrant setup for you.

Virtualization has a huge downside though - resources. Virtualization takes an entire computer and attempts to emulate it, and that emulation eats up a lot of resources, even with modern technologies from Intel and AMD and better software from Oracle, VMWare, or the devs working on KVM/Qemu. There is also a big performance hit, since many parts of the computer are being virtualized, like hard drives and networking. This is the trade-off for having an easily reproducible stack.

Docker, The Old New Kid on the Block

Last year a new piece of software named Docker burst onto the scene. Docker, much like Vagrant, is a wrapper around another technology that was a bit harder to use - LXC (LinuX Containers). What LXC, and others that existed before it like BSD jails or Solaris Containers, do is replicate the operating system rather than the underlying hardware. This means that while the container looks like a full computer, it isn't. It's sharing its host's resources directly while providing a "virtual" operating system. The OS is generally some flavor of Linux, since that is what LXC was built to provide.

There is a big difference though. Where Vagrant/Virtualbox emulate an entire PC, LXC simply runs the OS inside of another OS. And really it's not even doing that - it's running processes inside of a container, so it's not even a full operating system being run. You aren't booting Ubuntu inside of LXC (though you can), you are simply providing an OS environment to a process. This makes it much, much, much more lightweight than virtualization.

Like virtualization, containerization is not anything really new. Solaris had it back in 2004 with Solaris 10, and FreeBSD has had jails for even longer, I want to say. LXC itself is relatively new, and Docker made LXC much easier to work with by providing a great wrapper around it.

A Quick Overview

There are plenty of blog posts out there about getting started with Docker, so I'll skip that for the most part and get to the powerful part. One of Docker's features is providing an easy-to-use and easy-to-find list of base images, available on the Docker Hub. These images generally contain everything needed to run something like, say, NodeJS or nginx or MySQL. Want to fire up a temporary MySQL server without installing MySQL?

docker run -d -p 3306:3306 --name mysql -e MYSQL_ROOT_PASSWORD=password mysql

Docker will download the MySQL image, build it quickly, and start the MySQL server. It will expose port 3306 on the local machine so your apps can use localhost to connect to it. You can then start and stop it as needed, and the data will persist until you destroy the container. The best part is that the resource hit isn't much more than running MySQL normally, and it won't install anything on your local machine.
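The day-to-day lifecycle is just as simple. A quick sketch, assuming the mysql container from the command above and a running Docker daemon:

```shell
docker stop mysql    # stop the server; its data sticks around
docker start mysql   # start it back up with the same data intact
docker rm -f mysql   # destroy the container, and its data, for good
```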

You can then link things together. Let's say you have a PHP app inside of a Docker container that boots nginx and php-fpm and want it to talk to the MySQL container we just started:

docker run -d -p 80:80 --name webapp --link mysql:mysql php:5.5

Now your PHP container can see the MySQL container by using 'mysql' as a hostname.

You don't even have to use the existing containers. You can create Dockerfiles (think Puppet manifest files + Vagrant files) that will set up an entire system for you:

FROM phusion/baseimage:0.9.10

ENV HOME /root

RUN /etc/my_init.d/

CMD ["/sbin/my_init"]

# Nginx-PHP Installation
RUN apt-get update
RUN apt-get install -y vim git curl wget build-essential python-software-properties\
               php5-cli php5-fpm php5-mysql php5-pgsql php5-sqlite php5-curl\
               php5-gd php5-mcrypt php5-intl php5-imap php5-tidy mysql-client

RUN sed -i "s/;date.timezone =.*/date.timezone = UTC/" /etc/php5/fpm/php.ini
RUN sed -i "s/;date.timezone =.*/date.timezone = UTC/" /etc/php5/cli/php.ini

RUN sed -i "s/upload_max_filesize =.*/upload_max_filesize = 250M/" /etc/php5/fpm/php.ini
RUN sed -i "s/post_max_size =.*/post_max_size = 250M/" /etc/php5/fpm/php.ini

RUN apt-get install -y nginx

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN sed -i -e "s/;daemonize\s*=\s*yes/daemonize = no/g" /etc/php5/fpm/php-fpm.conf
RUN sed -i "s/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/" /etc/php5/fpm/php.ini

RUN mkdir           /var/www
ADD build/default   /etc/nginx/sites-available/default
RUN mkdir           /etc/service/nginx
ADD build/  /etc/service/nginx/run
RUN chmod +x        /etc/service/nginx/run
RUN mkdir           /etc/service/phpfpm
ADD build/ /etc/service/phpfpm/run
RUN chmod +x        /etc/service/phpfpm/run

EXPOSE 80 22
# End Nginx-PHP

VOLUME /var/www
VOLUME /etc/nginx
VOLUME /etc/php5
VOLUME /var/log

RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

I'm glossing over a ton of stuff, and I really recommend looking through the Docker documentation. It is incredibly well written and easy to follow.
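One bit worth a quick aside: the sed lines in the Dockerfile above just flip commented-out php.ini defaults to explicit values. Here's a minimal stand-in demonstration against a scratch file (demo-php.ini is a mock with a couple of stock settings, not a real PHP install):

```shell
# Create a mock php.ini with the stock commented-out/default settings.
printf ';date.timezone =\nupload_max_filesize = 2M\n' > demo-php.ini

# Same substitutions as the Dockerfile: uncomment and set the timezone,
# then raise the upload limit.
sed -i "s/;date.timezone =.*/date.timezone = UTC/" demo-php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 250M/" demo-php.ini

cat demo-php.ini
# date.timezone = UTC
# upload_max_filesize = 250M
```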

Why Should You Use Docker?

Docker is a great tool for containing processes. I'm currently using it to host many of my projects, because I know that every time I build the container it will be 100% the same as the last time. If you are familiar with basic system administration (and if you aren't, tell your local conference to let me give my Sysadmin talk :)) you can see that the Dockerfile is basically a list of shell commands to run, with some extra sugar for ports and such.

This isn't much different than Puppet and Vagrant. At my current job we use Vagrant so that every person has the correct version of PHP and the same settings across the board, so we can avoid "It works on my machine" debugging. For this use case, Docker and Vagrant/Puppet serve almost identical purposes.

Docker shines in the container aspect. In my above Dockerfile, there is no MySQL installed on that system. I can link to an external container, or simply point it at a standalone server, and it acts just like any other PHP process. What I can do, though, is build my app using Docker, push it up to the server, build the image there, and it will work. I can't do that with Puppet/Vagrant, because I don't run Virtualbox on my webserver. Yes, I can use Puppet to enforce configuration, but it's not exactly the same as production.

If I want to separate out my processes I can do that as well. I am no longer running an entire virtual machine to run a single website, I can run MySQL, nginx, and PHP in different containers with much less overhead. Want to test a different version of PHP? Boot up a different Docker container.

In my specific use case, my build system for my web apps will build a Docker container using the specified version of PHP, or without PHP. If I need to move it, a few commands to quickly back up, copy, and restore the container are all that is needed. The base server only has Docker and ssh installed, everything else is contained.
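That back-up, copy, and restore flow really is only a couple of commands. A hedged sketch, where myapp and newserver are hypothetical names for my image and the destination host:

```shell
docker save myapp > myapp.tar            # export the image to a tarball
scp myapp.tar newserver:                 # copy it to the new host
ssh newserver 'docker load < myapp.tar'  # restore the image there
```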

Docker is best used in situations where you need to package up something to distribute in a controlled way, not build a virtual machine to contain your application.

Why You Shouldn't Use Docker

Docker is not a replacement for virtualization, and they make that very clear. That doesn't stop people from equating it to virtualization though. If you need to virtualize an entire computer, use virtualization. If your shop runs VMWare ESXi, by all means use VMWare Workstation to build VMs and push them to ESXi. If you use HyperV and your devs like HyperV, use HyperV.

Don't treat Docker like a virtualization system. It's not. It's a containment system for keeping processes out of each other's face. MySQL doesn't need to worry about the PHP configuration, and nginx doesn't need to worry about MySQL.

Docker is also a Linux-based tool. Yes, there are things like boot2docker, which will boot a virtual machine that runs Docker inside of it, faking a Docker environment on OS X and, to an extent, Windows, but the added complexity of running Docker inside of a VM introduces its own special set of problems that you won't run into in production. Mounting volumes with boot2docker is an incredible pain. Docker really shines in a pure Linux environment.

Many people also treat the containers as entire operating systems and want things like FTP or SSH inside of them. That's not what the spirit of Docker is about, though you can easily start an SSH shell inside of a container. Docker is more about separating out the concerns of the parts of your application, or providing an environment for your application to live and run in.

If you don't plan on running Docker in production, I have a hard time recommending it. It's not that Docker isn't awesome, as I'm building great things with it, but it's a tool that solves a deployment issue more than a development issue.

Docker is Awesome, But Not a Silver Bullet

I love Docker. I've built a bunch of tools around it and it's made my sysadmin life much easier. I have it installed on all my Linux boxes. It's not going to solve any development problems you have though, and if you aren't running Linux I would honestly skip it entirely. boot2docker is a decent stopgap until you really need to build complex systems.

If you are interested in using Docker for development, check out the documentation. Spend some time reading up on it and how to use it before saying "I have to use Docker for everything, it's the new hotness!" Chances are it won't solve your development problems, but it can be a great tool in your development toolbox.