<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title><![CDATA[Chris Tankersley]]></title>
    <link href="/atom.xml" rel="self"/>
    <link href="/"/>
    <updated>2019-09-02T02:12:34+00:00</updated>
    <id>/</id>
        <generator uri="http://sculpin.io/">Sculpin</generator>
            <entry>
            <title type="html"><![CDATA[The False Promise of LTS Releases]]></title>
            <link href="/2019/09/01/the-false-promise-of-lts-releases/"/>
            <updated>2019-09-01T00:00:00+00:00</updated>
            <id>/2019/09/01/the-false-promise-of-lts-releases/</id>
            <content type="html"><![CDATA[<p>On August 30th, 2019, <a href="https://twitter.com/saramg">Sara Golemon (@saraMG)</a> tweeted out that developers on PHP 7.2 should start planning on their upgrade path to 7.3 or 7.4 since it was about to go into "security-only" mode, which means only security-related patches would be issued for it. If you were on 7.1, it was about to be End-Of-Life'd, which means 7.1 will receive <em>no</em> further patches.</p>

<p>As a "hot take" to this, <a href="https://twitter.com/syntaxseed">Sherri W. (@SyntaxSeed)</a> responded with:</p>

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Hot take: <a href="https://twitter.com/hashtag/PHP?src=hash&amp;ref_src=twsrc%5Etfw">#PHP</a>&#39;s release cycle is too fast. 😰 <a href="https://t.co/YfvBC7yGRa">https://t.co/YfvBC7yGRa</a></p>&mdash; SyntaxSeed (Sherri W) (@SyntaxSeed) <a href="https://twitter.com/SyntaxSeed/status/1167780014001139714?ref_src=twsrc%5Etfw">August 31, 2019</a></blockquote>

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>I responded to this with my own thoughts:</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The alternative is too slow :(<br><br>Right now it&#39;s roughly 12 months. Before 5.4 it swung between 6 months to nearly 3 years, depending on the release. At least now it is consistent. <br><br>Not that anyone ever upgrades anyway. Java and Python devs stay on outdated versions all the time.</p>&mdash; Chris Tankersley (@dragonmantank) <a href="https://twitter.com/dragonmantank/status/1167830653678817283?ref_src=twsrc%5Etfw">August 31, 2019</a></blockquote>

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>From there, other people joined in a bit of discourse over whether a long or a short release cycle helps developers. People weighed in on both sides.</p>

<h2 id="the-arguments-for-an-lts-release-cycle">The Arguments For an LTS Release Cycle</h2>

<h3 id="clients-won%27t-pay-for-upgrades">Clients Won't Pay for Upgrades</h3>

<p>From Sherri's perspective as a freelancer with 20-30 clients, it is hard to get a client to pay just because the underlying language has been upgraded. We already have problems justifying why clients should pay for testing, so coming back to a client a year or two after a project is finished and asking them to pay for an upgrade that adds no functionality can be a hard sell.</p>

<p>I understand the reasoning. I had two clients that were on PHP 5.2 for a very, very long time. When I say "long time," I mean PHP 5.2 was released in 2006, and these projects were still in use well into PHP 5.5's release.</p>

<p>The first was a small Bed and Breakfast site built on WordPress. The reservation system that they used was encrypted with <a href="https://www.ioncube.com/">IonCube</a>, a source-encryption extension for PHP. The client refused to pay for an upgrade to this plugin, and since it was wrapped in IonCube we could not patch it by hand. It refused to work on PHP 5.3 or anything higher. Neither I nor the original contractor who had worked with her could get it to work.</p>

<p>The second project was a local government project. They had a loaned server, paid for through donations, that ran Windows 2000 and was hosted at a local library. Since it had been bought and maintained through donations, the project was locked to this hardware. The library would only support the machine if it worked with their AD controller. That left us on Windows 2000.</p>

<p>This was around what would be the end of PHP 5.3's life. When 5.4 was released I contacted them about upgrading, especially because Zend Framework 1 was well outdated as well. There was no money for an upgrade at the time.</p>

<p>In both cases, it was a business decision motivated by money that kept these pieces of software on 5.2. They both stayed there for a very, very long time.</p>

<p>From Sherri's tweets, she is in much the same boat - many customers just do not want to pay for seemingly arbitrary upgrades to infrastructure. You could try and bundle the upgrade with new features, but then they may balk at the cost and decline the whole project. If releases were slower, upgrades could be tied to more major feature work.</p>

<h3 id="business-can-move-slow">Business Can Move Slow</h3>

<p><a href="https://twitter.com/suckup_de">Lars Moelleken (@suckup_de)</a> mentioned that sometimes business processes move slower than release cycles. This means that businesses that need stability look toward an LTS release to provide it, while the business can still deliver value for the life of a project.</p>

<p>I have seen this as well. One project I worked on had many different requirements, including a list of allowed operating systems and software versions. We had to support hardware that only worked with specific Linux kernels, and only certain distributions shipped the versions of the libraries we needed (specifically, a version of OpenSSL with some hardening patches applied).</p>

<p>We also had to be very cognizant of changes to the codebase. We had to be careful not to break anything, as loss of functionality could have some very bad consequences for our users. Getting patches installed for bugs was measured in months, not days.</p>

<p>This meant that much of our software stayed on older versions of languages or libraries. Python and PHP were both well past EOL when I started at the company. When I left, at least PHP was at 5.6, and Python had started a crawl toward Python 3. The underlying OS had not changed, and because the distro did not support rolling upgrades we could do little in the way of in-place upgrades.</p>

<p>We had planned on upgrading all of this, but much of it was tied to a sales cycle and maintenance timeframes. We would have had to maintain two versions of the software, which was an additional cost for us. It was decided that we would try and upgrade what we could when we had time, and push the customer toward a new sales cycle which would allow us to switch them over.</p>

<h3 id="we-just-can%27t">We Just Can't</h3>

<p>This argument came up during our weekly group get-togethers, where we just have a video call and hang out for an hour. The main focus was a coworker who used to work for an insurance company that did most of its work in Java 6.</p>

<p>When he started, he wanted to use some newer best practices and libraries that would have made their lives easier. There was a lot of pushback against doing this from various sides.</p>

<p>Many of the arguments revolved around either there not being enough time, or previous consultants having already decided that "Solution X" was a bad fit for the company. A few developers claimed that some of the new things would just never work with the current solution due to "technical restraints."</p>

<p>In this case, the development team decided that it would be too much work to push forward with an upgrade. Java 6 still worked, so it was better to just continue to deliver functionality with the current setup. Maybe new projects could allow newer setups.</p>

<h2 id="why-lts-is-bad">Why LTS is Bad</h2>

<p>In all three of the situations above, actual reasons were put forth as to why a slower release cycle would be better.</p>

<p>All of them are completely invalid - just excuses for not doing the work. I understand the <em>why</em> of each argument. I just do not accept them because, in the long run, they cause more work and more pain in the upgrade process. That makes it even harder to justify upgrades, because they will cost more, take more time, and be much more prone to failure.</p>

<h3 id="if-there-isn%27t-money-now%2C-there-won%27t-be-in-the-future">If There Isn't Money Now, There Won't Be In The Future</h3>

<p>In the Bed and Breakfast case, she ended up paying for a server all to herself running PHP 5.2 and an older operating system. This also required her to sign off on a security waiver stating that she understood the risk. I am not 100 percent sure she really did understand; otherwise, she would have paid for the new version of the plugin. As a contractor, I had to protect myself.</p>

<p>By the time I had stopped consulting for her, the plugin was not even maintained anymore - she would have had to pay for an entirely revamped reservation system. The cost went from what I think was $75 (at the time) for the plugin upgrade to nearly $2,000 just to replicate what the old plugin did.</p>

<p>The Zend Framework 1 application is still in use. I just checked, and it has been moved to a host running PHP 7.0. The project itself was never upgraded - I know because there were some workarounds I had to do to get Zend Framework to run under Windows 2000's version of IIS, and the site still has those workarounds even though, according to the headers, it is now running under Apache httpd. They just moved it. It was a simple application with no private data, so I am not really worried from a security standpoint, but no one has bothered to upgrade it.</p>

<p>They have contacted me on-and-off through the years about doing upgrades, but each time the cost is bundled with an upgrade to something newer, and lands well outside the price range they are willing to pay for changes.</p>

<p>If a framework, OS distribution, or language has an LTS release, it increases the length of time between supported releases. That adds complexity to upgrades, which increases costs. The increased cost and time are usually seen as a waste because they bring no new tangible benefit. Why pay for something that does not add new features or revenue?</p>

<p>Frameworks like Symfony do a good job of keeping the final releases in a version somewhat compatible with the next major version, making upgrades easier. Even so, with 3.4 being an LTS, the next LTS is 4.4... and if developers are not upgrading now, they are waiting for the next LTS, which will take longer, and therefore cost more, to implement.</p>

<h3 id="if-your-business-moves-at-a-glacial-pace%2C-that%27s-your-fault">If Your Business Moves at a Glacial Pace, That's Your Fault</h3>

<p>Saying that a business moves slowly, and that release cycles should therefore move slowly too, is a farce. I never accept it as a good answer. In fact, Sara Golemon can back this up:</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">5.6 got extended support. Rather than it giving those users more time to transition, it just gave those users more time to get stuck behind an ever more daunting wall of upgrades.</p>&mdash; SaraMG (@SaraMG) <a href="https://twitter.com/SaraMG/status/1167848528699432966?ref_src=twsrc%5Etfw">August 31, 2019</a></blockquote>

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>Much like putting off an upgrade because there is no money in the budget, purposefully putting off upgrades leads to the <em>exact same problem</em> - you push an upgrade off until the point where it is painful, and the amount of time and money required has only increased. Going from Symfony 3.4 to 5.x will not be straightforward. Moving from Ubuntu 14.04 to 18.04 will cause a lot of things to break.</p>

<p>You are now forced to spend more money and time than if you had just kept up with upgrades and updates. Rewriting software from scratch is more expensive than refactoring.</p>

<p>I fought very hard to move to PHP 7 and Python 3 on one project, and to upgrade the underlying OS. During my tenure, we went from 5.4 to 5.6 with nothing but package upgrades, and got those into production without the clients ever noticing.</p>

<p>We did actually get the PHP 5.6 to 7.2 code migration finished (just not into production) by the time I left. Since my predecessor and I took great care to use best practices, the handful of issues that <a href="https://github.com/phpstan/phpstan">phpstan</a> found were fixed in a few hours. Unit tests were added around them to make sure nothing broke.</p>

<p>The Python code was a mess and primarily 2.6, so it was mostly a lost cause. A rewrite was started that included tests upfront. It was not completed when I left, but it was light-years ahead of where the 2.6 code was. The only problem was it meant pulling our lead Python developer off for a few months to do the work, pushing back a release.</p>

<p>I cannot find the tweet for the life of me, but someone brought up the longevity of developers. Since job movement is fairly frequent in our industry, leaving an upgrade for two to three years can mean the loss of knowledge that is required for these upgrades to go smoothly.</p>

<p>By saying that your business processes move slowly, and accepting that, you are only making it harder on yourself and the people who come after you. You are costing your company more money in the long run.</p>

<h3 id="you-can%2C-you-just-don%27t-want-to">You Can, You Just Don't Want To</h3>

<p>This is usually where most developers end up when it comes to legacy code. The code is in such bad shape that it is hard to fix, so there is an unconscious bias against upgrading it. Developers worry about having to go to their boss to explain why they need to upgrade, and are afraid of being shot down. It is easier to just stay the course and ship features.</p>

<p>As with most of these situations, you are just delaying the inevitable. You are going to have to upgrade someday, and you do not want that someday to be when a massive CVE comes out of nowhere that you have to handle.</p>

<p><a href="https://twitter.com/ramsey">Ben Ramsey (@ramsey)</a> does bring up a good point:</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The major Linux distros all maintain what are effectively LTS versions of PHP. They are official to the respective distros, just not to the PHP project itself. The distro maintainers backport security patches, even when PHP core does not.</p>&mdash; Ben Ramsey (@ramsey) <a href="https://twitter.com/ramsey/status/1167846153024626689?ref_src=twsrc%5Etfw">August 31, 2019</a></blockquote>

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>Except that you are at the mercy of a maintainer who decides whether a given security fix should be backported. As fixes are backported, they diverge from the upstream codebase (say, PHP internals) and can even change behavior. Some security fixes cannot be backported at all because they deal with newer or changed code, so the maintainer has to decide between re-implementing the fix and leaving it out.</p>

<p>If you hold off because "someone else provides support," or "we just use what is in the repositories," you are giving up and deciding to stay where you are. It will only end in you still being behind and spending more time and money when you <em>have</em> to make an upgrade.</p>

<p>And if you are holding off because you still use <code>mysql_*</code> functions... Stop. Get off your butt and change it. You've literally had <em>years</em> to fix this.</p>
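<p>For anyone staring down that particular migration, here is a minimal sketch of the shape of the change. It is illustrative code, not from any real project; it uses SQLite so the example is self-contained, but the PDO calls are identical for MySQL (with a <code>mysql:</code> DSN instead).</p>

```php
<?php
// Hypothetical before/after for a legacy mysql_* call.
// Old style, removed entirely in PHP 7:
//   $result = mysql_query("SELECT name FROM users WHERE id = " . $id);
//   $name   = mysql_result($result, 0);

// New style with PDO. SQLite in-memory keeps this sketch self-contained;
// for MySQL the DSN would look like "mysql:host=localhost;dbname=app".
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec("INSERT INTO users (name) VALUES ('Chris')");

// Prepared statements also close the SQL injection hole the old code had.
$stmt = $pdo->prepare('SELECT name FROM users WHERE id = :id');
$stmt->execute([':id' => 1]);
echo $stmt->fetchColumn(), "\n"; // Chris
```

<p>The mechanical part of the conversion is small; the real win is that every converted query also becomes a parameterized one.</p>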

<h2 id="what-can-you-do%3F">What Can You Do?</h2>

<p>First and foremost, start your planning <strong><em>now</em></strong>. Depending on the quality of your application and the age of your infrastructure, you may have a little or a lot of work to do. The sooner you start planning, the easier time you will have.</p>

<h3 id="sell-the-upgrade">Sell The Upgrade</h3>

<p>You can start small and make some changes right away while you make the business case. Explain to upper management how not doing these upgrades is going to leave you in a bad spot. Here are a handful of arguments you can use:</p>

<ul>
<li>Finding developers who want to work on older software is always hard. Finding developers who want to work in old languages is harder.</li>
<li>Even with backported security fixes you are at a security disadvantage. It takes time for the backports to happen, if they happen at all.</li>
<li>Libraries and tools move on. You will be left with substandard tooling compared to competitors who stay up-to-date.</li>
<li>As libraries move on, you are left maintaining your old versions of them (especially manually backporting security patches). That is more work for your developers, and less time you can put toward new features that matter.</li>
<li>To do an upgrade at a later date means spending even more time not working on new features. This can put you behind competitors.</li>
<li>If you use AWS/Azure, newer PHP versions bring performance improvements for free, meaning fewer servers, which means less cost.</li>
</ul>

<p>If you find it impossible to sell doing the upgrades, you have two options - do it anyway, or leave.</p>

<p>If you think you can get away with it, or you have the power to do it, go ahead and just do the upgrades. If you really look into it you might find straight version upgrades are trivial; worst case, you will get an accurate estimate of how long the upgrade will take. Remember, the longer you delay, the longer the upgrade will take.</p>

<p>If a company cannot take the time to understand why being up-to-date is a business advantage, then move on to a company that does understand it.</p>

<h3 id="run-multiple-versions-of-php">Run Multiple Versions of PHP</h3>

<p>I use both <a href="https://www.docker.com">Docker</a> and <a href="https://github.com/phpenv/phpenv">phpenv</a> to handle multiple versions of PHP on a single machine. I can switch between them with a few small changes and try out my code as I upgrade.</p>

<p>For Docker, you should just need to change your Dockerfile or switch containers out. It will depend on how your setup is configured. A huge selling point of Docker is the ability to swap out containers, so if you are using Docker (and honestly, if you are using Docker but can't upgrade PHP... WTF?!), this should be fairly easy.</p>

<p>For locally installed PHP, I love phpenv. It allows you to have multiple versions installed at once, and has directions for setting up both PHP-FPM and Apache httpd.</p>

<p><a href="https://laravel.com/docs/5.8/homestead">Laravel Homestead</a> is another option. It is a Vagrant-based virtual machine with PHP 5.6 through PHP 7.3 installed. Even if you do not use Laravel, you can throw a normal PHP application in there and start switching PHP versions.</p>

<p>PHP tries very hard to keep backward compatibility, so unless you are using a removed feature like the <code>mysql_*</code> functions, your app might just work out of the box.</p>
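<p>A cheap smoke test before you flip versions is confirming that functions your code leans on still exist in the target PHP. This is just an illustrative snippet (the function list is mine, not exhaustive), not a replacement for running a real static analyzer:</p>

```php
<?php
// A few functions removed along the 5.x -> 7.x road:
// mysql_* and ereg/split in 7.0, mcrypt_* in 7.2.
$legacy = ['mysql_query', 'split', 'ereg', 'mcrypt_encrypt'];

foreach ($legacy as $fn) {
    printf("%-15s %s\n", $fn, function_exists($fn) ? 'still here' : 'REMOVED');
}
```

<p>On a stock PHP 7.2 or later, every one of those prints "REMOVED" - which is exactly the list of call sites to hunt down before the switch.</p>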

<h3 id="figure-out-code-changes">Figure Out Code Changes</h3>

<p>Look at upgrading your PHP version first. If you are on PHP 7.0 or 7.1, great! PHP does an awesome job of adhering to SemVer, so there should be little work to do for the minor versions. The PHP manual contains release and migration notes for every version since 5.0. Read the migration notes for each version you pass through:</p>

<ul>
<li><a href="https://www.php.net/manual/en/migration70.php">5.x to 7.0</a></li>
<li><a href="https://www.php.net/manual/en/migration71.php">7.0 to 7.1</a></li>
<li><a href="https://www.php.net/manual/en/migration72.php">7.1 to 7.2</a></li>
<li><a href="https://www.php.net/manual/en/migration73.php">7.2 to 7.3</a></li>
</ul>

<p>Ignore new features and focus on any changes you need to make.</p>

<p>Tools like <a href="https://github.com/phpstan/phpstan">phpstan</a> can check your code against PHP 7 and make suggestions on things you will need to change. As I mentioned before, I took our PHP 5.6 codebase and ran it against PHP 7.2 and only had a handful of things to change. You may have more, but it gives you a detailed list of what needs to be fixed.</p>
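<p>As one concrete taste of the kind of change the 7.2 migration notes list, <code>each()</code> was deprecated in 7.2 (and later removed in PHP 8), and the fix is mechanical. The snippet below is illustrative, not from any codebase mentioned above:</p>

```php
<?php
// PHP 5-era idiom, deprecated in 7.2 and gone in PHP 8:
//   reset($data);
//   while (list($key, $value) = each($data)) { ... }

// Drop-in replacement that works on every version:
$data = ['a' => 1, 'b' => 2];
foreach ($data as $key => $value) {
    echo "$key=$value\n";
}
```

<p>Most of what the migration notes flag is this mechanical: a removed function here, a changed default there, each with a documented replacement.</p>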

<h3 id="actually-upgrade">Actually Upgrade</h3>

<p>Most mainline distributions have good maintainers who keep PHP up-to-date. These packages go through the same process for inclusion as any other package, so convince your systems or operations team to update. If they push back, come armed with a good business reason (the packages are official and safe, newer PHP has better security support, is faster, and so on). Since these are official repositories, it is not hard to get them added to a system. It's not like the operations team needs to compile PHP themselves.</p>

<p>For Ubuntu/Debian there is the set of packages from Ondřej Surý, available at <a href="https://deb.sury.org/">https://deb.sury.org/</a>. He has worked for years to provide high-quality Debian packages, and all of the PHP packages are either directly from him or based on his packages.</p>

<p>On RHEL/CentOS/Fedora, you have packages from Remi Collet, available at <a href="https://rpms.remirepo.net/">https://rpms.remirepo.net/</a>. He maintains packages for core PHP as well as a bunch of extensions, for various versions of PHP. As with Ondřej, Remi is the package maintainer for Fedora, so these are as official and safe as you are going to find for RPM-based systems.</p>

<h2 id="don%27t-delay%2C-start-now">Don't Delay, Start Now</h2>

<p>I hope at this point I have convinced you that something as nice-sounding as an LTS release is not as cozy and safe as it makes itself out to be. You are sacrificing time and money later for perceived stability today.</p>

<p>A project that stays up-to-date, and puts processes in place that help it update continuously, will stay competitive longer. If security is an ingrained part of software development, why isn't upgrading? Like security, upgrading isn't something you bolt on or do later.</p>

<p>Stop making excuses, and start upgrading.</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[Upgrading To Sculpin 3]]></title>
            <link href="/2019/04/04/upgrading-to-sculpin-3/"/>
            <updated>2019-04-04T00:00:00+00:00</updated>
            <id>/2019/04/04/upgrading-to-sculpin-3/</id>
            <content type="html"><![CDATA[<p>So not that long ago, <a href="https://blog.sculpin.io/2019/04/10/sculpin-3-is-here/">Sculpin released version 3.0</a> thanks to a bunch of hard work by <a href="https://twitter.com/Beryllium9">Kevin Boyd (@Beryllium9)</a>. This brought Sculpin up-to-date with current versions of PHP, and updated a bunch of stuff under the hood. It also showed why I love Sculpin &mdash; because it ultimately is a very simple idea, a major upgrade like this barely caused any problems, and the upgrade was very easy.</p>

<p>First off, I deleted my old sculpin.json and sculpin.lock files. I still used them because I'm a horrible person, and the old deploy system I used to use still used the phar version. Funny enough, I used <code>vendor/bin/sculpin</code> locally since I had long since stopped using PHP 5, but my old build system for the blog still ran PHP 5.6 and used the phar. I know, I know. That's all been updated now.</p>

<p>I then deleted my <code>composer.lock</code> file and updated my <code>composer.json</code> to use the version 3 tags for Sculpin:</p>

<pre><code>{
    "require": {
        "sculpin/sculpin": "~3.0"
    }
}
</code></pre>

<p>As expected, when I ran <code>composer install</code> it nuked a bunch of old libraries and dragged in all the new ones. This was a super simple <code>composer.json</code>, so there were no other conflicts.</p>

<p>I then ran <code>vendor/bin/sculpin generate --server --watch</code> to build the site and see what happened. I had two very minor issues. One was a <code>title:</code> front matter entry that started with a backtick ("`"); I removed the backtick, since it was not a big deal. The other was that one of my source files was missing an extension, which produced the following error:</p>

<pre><code>Exception: Argument 2 passed to Sculpin\Core\Formatter\FormatterManager::formatBlocks() must be of the type string, null given
</code></pre>

<p>This took a bit of tracking down, but it looks like if the extension is missing, Sculpin cannot figure out what type of content it is and therefore does not process it. When it grabs the content, it gets a null value, and the system complains. Another small issue: adding ".md" to that file fixed it right away! I'll be submitting a bug report for that.</p>

<p>Overall though, the process took about 20 minutes to upgrade, with almost all of that time being the missing extension problem. Sculpin continues to be a very simple, robust, and yet easy-to-use system, and I love it for that.</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[cron-expression Updates and Moving Forward]]></title>
            <link href="/2017/10/12/cron-expression-update/"/>
            <updated>2017-10-12T00:00:00+00:00</updated>
            <id>/2017/10/12/cron-expression-update/</id>
            <content type="html"><![CDATA[<p>In late 2015/early 2016, I took over maintenance for the <a href="https://github.com/mtdowling/cron-expression"><code>mtdowling/cron-expression</code></a> library. This was a library that we used at my then day job quite heavily, as it was part of our daily processing and scheduling for customers around the world. It let us schedule cron jobs relative to them, instead of us, without much work. When Michael reached out on Twitter for someone to help maintain it, I jumped at the chance.</p>

<p>For those that do not know what the library does, <code>cron-expression</code> checks whether a cron expression (something like <code>0 0 * * *</code>) is valid, whether it matches the current time and needs to run, and can determine future run dates. If you need a simple way to schedule things, cron is a very useful and well-understood syntax. <code>cron-expression</code> does not run your code, though; it is mostly a validation library.</p>
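<p>To make "checks whether an expression matches the current time" concrete, here is a toy matcher. This is emphatically <em>not</em> the library's implementation - it only handles bare numbers and <code>*</code> in the five fields, with no ranges, lists, or steps:</p>

```php
<?php
// Toy cron matcher (NOT the library's code): supports only plain
// numbers and "*" in the five fields, to show the basic idea.
function cronMatches(string $expr, int $time): bool
{
    $fields = preg_split('/\s+/', trim($expr));
    // Actual values for: minute, hour, day-of-month, month, day-of-week
    $now = explode(' ', date('i G j n w', $time));

    foreach ($fields as $i => $field) {
        if ($field !== '*' && (int) $field !== (int) $now[$i]) {
            return false;
        }
    }
    return true;
}

$midnight = mktime(0, 0, 0, 9, 1, 2019); // 2019-09-01 00:00:00
var_dump(cronMatches('0 0 * * *', $midnight));  // bool(true)
var_dump(cronMatches('30 4 * * *', $midnight)); // bool(false)
```

<p>The real library layers range, list, step, and name handling on top of this kind of field-by-field comparison, and that extra layer is exactly where the validation bugs crept in.</p>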

<p>Much like Sculpin, it's a pretty stable project, so there wasn't a ton of movement on it development-wise. Some bug fixes here, a few enhancements there, but nothing major. At the beginning of 2017 I pushed out a 1.2.0 release. I had decided that I would only support PHP 7.0 going forward, and by this time I had learned that Laravel was using the library under the hood, so I wanted to get one final release done on the older PHP 5.x branch. v2.x and later would all be PHP 7.x compatible.</p>

<p>Then I started digging into a bug, and that bug turned into a few bugs, all around validation. As it turns out, the regex the library used was really loose and let a lot of stuff through. This did not seem to affect valid expressions, but it allowed a lot of junk through. As time went on, more and more reports of this came in. The underlying logic had to change, and I started working on that as the main focus.</p>

<p>Then I got this bug report - <a href="https://github.com/mtdowling/cron-expression/issues/153">#153, "Wrong nextRunDate for * rules"</a>. Long story short, step ranges in the library were broken. Someone had discovered a bug in Laravel's cron system that caused an expression to validate against the incorrect set of months. Even I misunderstood how stepping worked, so I ended up diving into the source code for <a href="https://github.com/cronie-crond/cronie">cronie</a>, one of the main cron daemons shipped with Linux systems.</p>

<p><code>cron-expression</code> had gotten the implementation <em>completely</em> wrong. I re-implemented much of our validation logic to work the same basic way cronie does, and this ended up fixing not only the stepping issue but also our data validation woes. The new code is a bit more compact and more unified in how the library does validation. Overall this was good.</p>

<p>There was a big problem, though: this was a huge backward compatibility break. When a bug has survived long enough that people rely on its behavior, it is no longer a bug - it's a feature. So the stepping fix has the potential to break a huge number of systems, even though the old behavior was wrong. People rely on it.</p>

<p>Sufficient time has passed for a v2.0.0 release, and that will be happening today. All the fixes will be available on Packagist as soon as it updates.</p>

<p>The project will also be moving to a new repository: <a href="https://github.com/dragonmantank/cron-expression">dragonmantank/cron-expression</a>. The reasons are twofold. One, I am not and cannot be the admin of the original repo, as it is a personal repo rather than an organization; I cannot wire in new build or checking systems at all. Two, this is the perfect time for the break: v2.0.0 is incompatible with v1.x because of the stepping fix, and a new repository lets frameworks and other installs that rely on the v1.x branch move at their leisure without breaking them.</p>

<p>The old repo will no longer be maintained, but I will still watch the issues. The existing issues will still be evaluated and looked at, just implemented in the new repo. The old package will remain in packagist for those that need it. All new work will be done in the new repo against the 2.x branch.</p>

<p>If anyone has any questions, feel free to hit me up on twitter at <a href="https://twitter.com/dragonmantank">@dragonmantank</a>.</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[Taking Back My E-mail]]></title>
            <link href="/2017/05/08/taking-back-my-email/"/>
            <updated>2017-05-08T00:00:00+00:00</updated>
            <id>/2017/05/08/taking-back-my-email/</id>
            <content type="html"><![CDATA[<p>A few days ago at our family dinner I talked about how <a href="http://www.wwe.com/videos/alexa-bliss-is-setting-off-everyones-amazon-echo">Alexa Bliss was setting off Amazon Echos</a> during her matches. This is a slightly funnier, and less expensive, version of the <a href="http://www.nbcsandiego.com/news/local/TV-News-Report-Prompts-Amazon-Echo-to-Buy-Dollhouses-410162975.html">TV Report prompts Amazon Echos to buy dollhouses</a> story. I showed my wife a video of how the commentators were saying her name over and over, and an Echo was responding.</p>

<p>My youngest son said it would be cool to have one, and asked if we could get one. I said no. My wife and I are on the same page about this, but the idea of a device, which I have no control over, listening to everything being said is not something we want in our house. It's not just me not liking the Amazon Echo, either - I don't want a Google Home in the house either.</p>

<p>That led to a discussion about why having a listening device in the home is bad. We expect a certain amount of privacy in our own home, regardless of the fact that we are not doing anything against the law. I just do not want my private conversations overheard by a device that sends everything back to a server, where it sits forever. <a href="https://qz.com/873656/an-amazon-echo-might-have-heard-what-happened-on-the-night-of-a-murder/">Police have already tried to get Echo recordings</a> for a murder case, though if Amazon is to be believed, unless someone said "Alexa, help me!" nothing should have been recorded. Even if something had been recorded, Amazon states that such voice recordings are encrypted.</p>

<p>Knowing how well software is built, and how often "encrypted" data gets accessed, I do not want my words recorded and stored on Amazon's, or anyone else's, servers. Hell, I work for a company that designs and sells a network appliance to find bad traffic on networks. Once someone has access to servers or the network, getting at information is trivial. Amazon now also sells the Echo Look, a camera that currently helps you dress fashionably. I do not even have to explain how creeped out that makes me feel.</p>

<p>We grow increasingly reliant upon companies that make our lives more convenient. I've used Google's e-mail, calendaring, and document storage services for years because they were easy to use, worked directly with my phone, and meant I did not have to worry about e-mail. There are some nice perks to that: online document editing, having airline data directly parsed and made available, intelligent spam filtering, and device syncing, to name a few.</p>

<p>If I do not want my speech hosted on Amazon or Google servers... why is my textual life hosted and sifted through by Google?</p>

<h2 id="taking-back-my-e-mail">Taking Back My E-mail</h2>

<p>The first thing I've decided to move off of Google, and back into my own control, is my e-mail.</p>

<p>I have a lot of e-mail addresses, and I have been attempting to consolidate them into just a few. Google made that pretty easy. I'm grandfathered into the old G Suite setup of it being free for 100 users, but I took liberal advantage of domain aliases and catchall e-mail addresses.</p>

<p>I looked at services like <a href="https://www.fastmail.com/">FastMail</a>, <a href="https://protonmail.com/">ProtonMail</a>, and <a href="https://kolabnow.com/">Kolab Now</a>. All three of them are highly regarded, with Kolab and ProtonMail being open source projects. Moving my domains and setting up aliases, though, would end up being very, very costly. Kolab charges around $50 just for setting up a single domain alias. FastMail and ProtonMail would start to get very pricey as I moved all my domains over.</p>

<p>ProtonMail also lost points as I would have to use a web browser on my desktop. I want my e-mail in any app of my choosing. I am not paranoid enough to think someone is trying to get into my e-mail, so the security aspect of ProtonMail was not a huge selling point.</p>

<p>I decided to host my own e-mail.</p>

<h2 id="running-my-own-server">Running My Own Server</h2>

<blockquote>
  <p>"Email is one of the bastions of the decentralised Internet and we should hang onto it" - <a href="https://news.ycombinator.com/item?id=12282231">Nux, Hacker News</a></p>
</blockquote>

<p><a href="https://joind.in/event/lone-star-php-2015/your-inner-sysadmin">I'm not afraid of servers or their maintenance</a> at all. My career started with maintaining servers and dealing with configuring them, so why not just run my own e-mail server?</p>

<p>I know, I know. I should not run my own e-mail server because:</p>

<ul>
<li>There are lots of moving parts</li>
<li>It's not just e-mail: it's also virus scanning, spam filtering, and e-mail access</li>
<li>Maintenance is time consuming</li>
<li>Blacklist maintainers are cold, heartless beings that never remove IPs</li>
<li>Russians will hack me</li>
<li>E-mail isn't secure</li>
<li>I have to trust my host</li>
</ul>

<p>Frankly, most of the above is FUD. If we, as developers, are telling people to run things like Docker or set up their own VPS because "it's the right way to run a web app," then running an e-mail server should not be some scary thing. Granted, I am not going into this blind as I've set up an e-mail server before, but come on people. It isn't that bad.</p>

<p>I <em>do</em> want to cut down on the amount of work I have to do. I first looked at <a href="https://mailinabox.email/">Mail-in-a-Box</a>, which is a set of scripts that sets up a mail server. I decided against it as it is pretty much all or nothing. You run and set up the box the way it wants to be set up and that's it. Want to do something else with the box? Too bad.</p>

<p>I then found <a href="https://github.com/sovereign/sovereign">sovereign</a>. It is a set of Ansible playbooks that set up a server that includes e-mail as one of the various services. Since it is just based on Ansible configuration and I know how to work with that, I decided on sovereign.</p>

<h2 id="setting-up-the-server">Setting up the Server</h2>

<h3 id="the-server">The Server</h3>

<p>I use <a href="https://m.do.co/c/142755e4323e">Digital Ocean</a> for a lot of projects. As I said before, privacy from foreign powers is not a current concern I have so hosting a server in the US is fine for the moment. I created a VPS with Debian 8 as that was what sovereign recommended.</p>

<p>The next thing I did was check the assigned IP on <a href="http://multirbl.valli.org/">http://multirbl.valli.org/</a>. This site will check a bunch of well-used DNS blacklists to see if the IP that Digital Ocean gave me has a shady history. The first one... well, once it hit twenty blacklists I deleted the VM and rebuilt it to get a different IP.</p>
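<p>For reference, a DNSBL lookup works by reversing the octets of the IP and querying it as a hostname under the blacklist's zone; any answer means the IP is listed. A minimal sketch of the mechanics (the IP and the <code>zen.spamhaus.org</code> zone here are just examples):</p>

```shell
# Build a DNSBL query name by reversing the IPv4 octets (example IP)
ip="203.0.113.7"
reversed=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "${reversed}.zen.spamhaus.org"
# To actually check: dig +short "${reversed}.zen.spamhaus.org"
# An A record in 127.0.0.0/8 in the answer means the IP is on that blacklist.
```

<p>Sites like multirbl just automate that query across dozens of lists at once.</p>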

<p>The second one was only on four blacklists. That is a much more manageable number to deal with. Most blacklists are fairly easy to get removed from, and if I'm only on four I will take my chances.</p>

<p>With that sorted out I followed through the rest of the instructions in the sovereign README file. It took only a few minutes of prep before running the Ansible playbooks.</p>

<p>I started off with a domain that did not previously have e-mail associated with it, to test things out. That way, if it all went to Hell, I wouldn't lose any e-mail. I ran the scripts, and after about 15 minutes the server ran out of memory.</p>

<p>I tried to work around it, but with everything running, 512MB was not big enough. I deleted the server and provisioned a bigger one. Not only did it have more memory, it also had more hard drive space.</p>

<p>That worked better. About 20 minutes later I had a server up and running!</p>

<h3 id="shutting-down-services">Shutting down Services</h3>

<p>sovereign comes with a bunch of services installed, and since this was my first run-through I let it install everything. Once I confirmed everything was working well, I SSH'd into the server and disabled a bunch of stuff I did not need, like ZNC. I happily pay IRCCloud for IRC bouncing.</p>

<p>Most servers are compromised because of services running on the box. It is rare that an actual OS exploit is the problem. I removed the services I did not need from the <code>site.yml</code> file, and shut down services I did not need.</p>

<p>I did want to keep the webmail so I just disabled a bunch of vhosts as well. So far so good.</p>

<h3 id="multiple-domains">Multiple Domains</h3>

<p>sovereign actually makes it pretty simple to set up multiple domains on a single install. <code>group_vars/sovereign</code> houses all of the domains and accounts you want to set up. Adding a second domain was as simple as adding a new entry under <code>mail_virtual_domains</code>, and the associated accounts under <code>mail_virtual_users</code>.</p>
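<p>The shape of that file is roughly as follows. This is a sketch with placeholder domains, accounts, and key names modeled on sovereign's example <code>group_vars</code> file, so check the real example file rather than copying this verbatim:</p>

```yaml
# group_vars/sovereign (fragment, illustrative values only)
mail_virtual_domains:
  - name: example.com
    pk_id: 1
  - name: example.net
    pk_id: 2

mail_virtual_users:
  - account: chris           # becomes chris@example.com
    domain: example.com
    password: "{{ vaulted_mail_password }}"
    domain_pk_id: 1
    account_pk_id: 1
```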

<p>Another Ansible run, and the domains I wanted to move off of Google were all set up. I tested logging in via Evolution, the e-mail client that comes with GNOME, which is what I use on my desktop and laptop. Autoconfiguration did not work, but I manually set up IMAP+ with no issues. I could send e-mail to and from the accounts without a problem.</p>

<p>That left me figuring out how to get catchall e-mail addresses to work. There was <a href="https://github.com/sovereign/sovereign/issues/687">an open issue</a> on the Github project, so I dug around a bit. sovereign uses a PostgreSQL-backed e-mail system for the users, so finding how to do catchall addresses was a bit of a pain. Turns out it is really hard to do and not well documented. This wasn't a problem with sovereign, but with Postfix itself.</p>

<p>I found instructions for how to do it at <a href="https://workaround.org/ispmail/wheezy/connecting-postfix-to-the-database">https://workaround.org/ispmail/wheezy/connecting-postfix-to-the-database</a>. I created a new file at <code>roles/mailserver/templates/etc_postfix_pgsql-email2email.cf.j2</code> and modified the Ansible scripts to use it per the instructions on workaround.org.</p>
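<p>The template follows Postfix's standard pgsql lookup-table format from the workaround.org guide. The connection values below are placeholders (sovereign fills in the real ones from Ansible variables), and the table and column names are assumptions based on that guide, not sovereign's actual schema:</p>

```
# roles/mailserver/templates/etc_postfix_pgsql-email2email.cf.j2 (sketch)
hosts = 127.0.0.1
user = mailuser
password = {{ mail_db_password }}
dbname = mailserver
query = SELECT email FROM virtual_users WHERE email = '%s'
```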

<p>Another Ansible deploy, and I tested it from my old Hotmail address.</p>

<p>I did not get my e-mails.</p>

<p>Checking the logs, I saw greylisting errors. Turns out Hotmail/Outlook.com servers get flagged quite regularly for spam, so my server was greylisting them. I added the following to <code>/etc/postgrey/whitelist_clients</code> and restarted postgrey:</p>

<pre><code># Outlook.com
104.47.0.0/17
40.107.0.0/16
/.*outbound.protection.outlook.com$/
/outlook/
</code></pre>

<p>I sent another e-mail, and my catchall started working! Well, technically, it was working before; my greylist service was just slowing Outlook.com down.</p>

<h3 id="moving-from-google">Moving from Google</h3>

<p>After all my testing, I was ready. I went into my DNS providers and added the needed DKIM, DMARC, and MX records to point to my new server. I waited about fifteen minutes, as the TTL on all the records was 900 seconds, and tried to send an e-mail. It showed up in my new inbox.</p>
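<p>In zone-file form, the records look roughly like this. The host names and key material are illustrative stand-ins, not my actual records:</p>

```
; illustrative zone fragment, 900 second TTL as mentioned above
example.com.                  900 IN MX  10 mail.newmailserver.com.
mail._domainkey.example.com.  900 IN TXT "v=DKIM1; k=rsa; p=<public key here>"
_dmarc.example.com.           900 IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```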

<p>I actually started receiving legitimate e-mail as well. I noticed some, like e-mails from Twitter, were coming in about 2 hours later than their timestamp. A quick look at the logs showed I was greylisting Twitter's servers as well. Everything was working, though, as greylisting is a normal part of day-to-day e-mail. If I'm greylisting someone and it's important, there are many other ways to get in touch with me ASAP.</p>

<p>I have years' worth of e-mail sitting in GMail, though. I wanted to move all of that over.</p>

<p>After some searching I came across <code>imapsync</code>, which is an open source tool that syncs mail from one IMAP server to another. I followed the directions at <a href="http://blog.jgrossi.com/2013/migrating-emails-using-imap-imapsync-tofrom-gmail-yahoo-etc/">http://blog.jgrossi.com/2013/migrating-emails-using-imap-imapsync-tofrom-gmail-yahoo-etc/</a> on compiling and setting it up on my Ubuntu 17.04 desktop.</p>

<p>I then followed the directions at <a href="https://imapsync.lamiral.info/FAQ.d/FAQ.Gmail.txt">https://imapsync.lamiral.info/FAQ.d/FAQ.Gmail.txt</a> for syncing from GMail to my local server. I settled on the following command to run:</p>

<pre><code>imapsync \
           --host1 imap.gmail.com \
           --ssl1 \
           --user1 me@googlehostedemailaddress.com \
           --password1 p@ssw0rd \
           --authmech1 plain \
           --host2 mail.newmailserver.com \
           --ssl2 \
           --user2 me@googlehostedemailaddress.com \
           --password2 n3wp@ssw0rd \
           --useheader="X-Gmail-Received" \
           --useheader "Message-Id" \
           --automap \
           --regextrans2 "s,\[Gmail\].,," \
           --skipcrossduplicates \
           --folderlast  "[Gmail]/All Mail"
</code></pre>

<p>GMail has a 2.5GB limit on mail transfer per day, but I was below that limit. I fired up the command and was immediately shut down by Google. They consider PLAIN authentication an insecure way to authenticate (for good reason), but they provided a link and explanation. I <a href="https://support.google.com/accounts/answer/6010255?hl=en">followed the directions</a> and ran the command again.</p>

<p>It took nearly 48 hours to download all of the e-mail. It worked, though. I started to see all of my folders and e-mail show up on my new server.</p>

<p>With that, I was off of Google's mail servers.</p>

<h2 id="security-concerns">Security Concerns</h2>

<p>E-mail is not secure. It was never designed to be. Even running something like ProtonMail, which touts its encryption, does nothing to encrypt e-mails once they leave its servers. Anyone can sniff e-mail on the wire. That's the nature of e-mail.</p>

<p>What is a concern is authentication, and access to the box.</p>

<p>SSH access is locked down to key-based authentication. No users have passwords. sovereign also sets up fail2ban, which should stop any brute force attacks. I'll probably supplement that with <a href="https://ossec.github.io/">ossec</a>. I should be able to get that installed with a new Ansible role.</p>

<p>For any virtual hosts on the machine, as well as IMAP, sovereign sets up <a href="https://letsencrypt.org/">Let's Encrypt</a> for SSL certificates, as well as scripts to renew them when needed. sovereign sets up Roundcube for webmail, which is protected with these certificates, and any new subdomains it activates will be protected as well (with the appropriate changes to Ansible).</p>

<p>E-mail access and sending require authentication. Most servers get blacklisted due to the lack of authentication on the sending portion. Authentication is set up by default with sovereign, and all of the authentication happens over SSL/TLS.</p>

<p>My main job now is to update the base OS and packages every so often. Other than that, I think I'm pretty well set up.</p>

<h2 id="step-one-completed">Step One Completed</h2>

<p>It's been a few days now and so far so good. The only hard thing thus far was setting up the catchall addresses. I'm getting e-mail on my laptop, desktop, and phone without an issue. I've tested sending mail to different services and so far have not been blocked. The e-mail transfer from GMail to the new server has been taking a while, but it's pretty hands off once it starts.</p>

<p>I am not totally off of Google yet. The next step is to move all of my calendars, which I believe I can do with <a href="https://owncloud.org/">ownCloud</a>. ownCloud is an open source file, calendar, and contact storage/sharing service that gets installed as part of sovereign. It should handle moving both my calendar from Google Calendar and my files from Google Drive.</p>

<p>I also have a few patches that I want to clean up and send to sovereign. One nice one is the catchall setup, but I've also been working with the Ansible scripts a bit to make runs smaller. By default it runs all the tasks, so something like adding a single e-mail address means a 15-20 minute run.</p>

<p>So far I've been impressed with sovereign. I'd highly suggest looking into it if you want to run your own server.</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[Post Open Source is a Fallacy]]></title>
            <link href="/2017/05/07/post-open-source-is-a-fallacy/"/>
            <updated>2017-05-07T00:00:00+00:00</updated>
            <id>/2017/05/07/post-open-source-is-a-fallacy/</id>
            <content type="html"><![CDATA[<h5 id="this-was-originally-published-on-medium.com">This was originally published on <a href="https://medium.com/@dragonmantank/post-open-source-is-a-fallacy-6f39b8f73f25">Medium.com</a></h5>

<p>If you aren’t familiar with how Open Source came to be the way it is today, please read <a href="/2017/01/04/the-history-of-open-source/">“A History of Open Source,”</a> which is effectively Part 1 of this small series of posts.</p>

<blockquote>
  <p>“younger devs today are about POSS — Post open source software. fuck the license and governance, just commit to github.” — James Governor</p>
</blockquote>

<p>There are two basic licensing camps in the Open Source world — the world of copyleft and the GPL, and the permissive realm of BSD/MIT. Since 2000, there has been a shift toward permissive licensing.</p>

<p>Is one better than the other? If so, why?</p>

<p>The trend <em>does</em> seem to indicate that the current development environment is favoring developer ease-of-use for code (permissive licensing) over a requirement of code sharing (copyleft). The general idea of permissive licensing is to make the Developer’s life easier, but what if there was an even more permissive license than permissive licenses?</p>

<blockquote>
  <p>“Empowerment of individuals is a key part of what makes open source work, since in the end, innovations tend to come from small groups, not from large, structured efforts. ” — Tim O’Reilly</p>
</blockquote>

<p>As with everything, the internet changed how we shared code.</p>

<p>Github did what no other source code sharing system did, and that was make it easy to share code. Now before you jump down my throat, let me clarify. While there had been Sourceforge for open source projects, and Google Code for sharing code, neither were that great, let alone for a new developer getting started.</p>

<p>Github made it easy for anyone to throw code up on a website, and made it easy to get that code down to your machine. They invested in teaching people to use git and made the case for why you should use them. They made open source project hosting free.</p>

<p>For many years Github actively made the decision to not enforce a license on code that was uploaded as open source repositories. Github left it up to the maintainer to sort that out. 80–90% did not bother with a license. That is even after a change in 2013 where Github decided to start asking about licensing when new projects were created.</p>

<blockquote>
  <p>“Software is like sex: it’s better when it’s free.” — Linus Torvalds</p>
</blockquote>

<p><strong>“All information should be free”</strong> has been a tenet of hackers since the 1960s. Instead of restricting usage of code, why not just make it free? Completely free?</p>

<p>There has been a recent trend toward the idea of releasing software under much more lax licenses that veer more toward Public Domain than they do an established Open Source license. In some extreme cases code is being released without any license as to how it can be used, under the assumption that no license is the same as Public Domain.</p>

<p>The driving force behind this idea is “I don’t care what you do with my code.” It’s a noble idea that hearkens back to the 1960s. Code does not need all of these rules around sharing and usage, just take my code and do what you want.</p>

<p>There are even licenses that support this, due to the way that copyright works. Licenses such as WTFPL (Do What the Fuck You Want to Public License) and the DBAD (Don’t be a Dick) Public License are designed to get out of the nitty-gritty thinking when it comes to sharing code — here is code, just use it.</p>

<h2 id="the-first-fallacy%E2%80%8A%E2%80%94%E2%80%8Ano-license-is-ok">The First Fallacy — No License is OK</h2>

<blockquote>
  <p>“Linux is not in the public domain. Linux is a cancer that attaches itself in an intellectual property sense to everything it touches. That’s the way that the license works.” — Steve Ballmer</p>
</blockquote>

<p>Licensing is restrictive no matter which camp you are in, and by making licenses you make it harder to integrate software. For example, the company you work for probably will not use GPL software for fear of having to release the source code of their flagship product which contains many proprietary ideas and business rules.</p>

<p>In the US, copyright is automatically assigned. There isn’t a special form you have to send in to the government; when you create something, copyright is assigned to you, or to whomever hired you to do the work. There are things you can do to further prove that you are a copyright holder, but simply publishing code online marks you as the copyright holder. Created works do not automatically go into the public domain anymore.</p>

<p>Copyright holders hold all the cards. Just because you can see the source code for a piece of software doesn’t mean you can use it without repercussion, just like finding a $100 bill on the ground doesn’t automatically make it yours.</p>

<p>We live in a world controlled by copyright, and until such a time as copyright laws change, releasing software without a license is a dangerous move, even potentially more dangerous than other licenses.</p>

<p>Unless you have something in your hand that says you are allowed to use the software, you are right back at an AT&amp;T Unix situation. Otherwise the copyright holder can pick up their ball and go home, or worse, sue you for using their software.</p>

<h2 id="the-second-fallacy%E2%80%8A%E2%80%94%E2%80%8Alax-licenses-are-open-source">The Second Fallacy — Lax Licenses are Open Source</h2>

<blockquote>
  <p>”From my point of view, the Jedi are evil!” — Anakin Skywalker</p>
</blockquote>

<p>The current development landscape very much carries a “Fuck It, Ship It” attitude. It is a core mentality of many tech startups and developers. Getting an MVP out and validated is more important than wasting time thinking about licensing. We are developers that use open source tools so we feel the need to give back, so we release what code we can.</p>

<p>In an ideal world you might just release your software as Public Domain, but there are many countries that do not recognize public domain, and public domain has different definitions depending on where you are. You need some sort of licensing.</p>

<p>In a world where Public Domain is not really a good thing to release code under, we end up with these licenses that absolve the original developer from putting restrictions on the code.</p>

<ul>
<li>Don’t Be a Dick</li>
<li>Do What The Fuck You Want</li>
<li>Don’t Be Evil</li>
</ul>

<p>Developers do not want to have to mess around with licensing. Public domain is not a viable choice. “I just want to release code.” Developers ended up coming up with very lax software licenses where they basically say they don’t care what you do with the code.</p>

<p>Public Licenses are also littered with vague concepts, like the DBAD. What defines being a dick? Who defines it? While there are examples in the licenses, DBAD even says that it is not limited to the examples given. What happens when someone decides you are being a dick with their software when you don’t think you’re being a dick? Douglas Crockford famously added “The Software shall be used for Good, not Evil” to the MIT license used for JSMin. Who determines what is evil?</p>

<p>These lax licenses are coming from a good place, and the people that come up with them are not ignorant or stupid people. The only problem is that the legal system doesn’t like vague concepts, and from a business standpoint vague definitions can really put you in a bad spot if someone decides you are being a dick, or doing something evil.</p>

<hr />

<p>Developers that are fed up with licenses and procedure and bureaucracy are, in my mind, ignoring sixty years of history in computing. The “Just Ship It” attitude and the “Just Commit It” culture of many groups feeds into this idea that the early MIT hackers would have loved — make the software available and good things will come of it.</p>

<p>As humans though, we screw it up. We tried sharing software without licensing and, honestly, that did not end up working out. Hell, we cannot even agree on how software should be shared. Should it be copyleft? Should it be permissive? Can’t I just give it away?</p>

<p>Open Source licenses were chosen because they had been vetted and have the legal verbiage to make their usage cases safe (permissive or copyleft). While it might suck to have to put a license on something, sometimes the right, and safe, thing to do is suck it up and spend thirty seconds deciding if you want a permissive license or a copyleft license.</p>

<p>Saying that we are beyond Open Source and the need for licenses is just a lie developers are telling themselves when they don’t want to think about what happens to their code. You created it; take thirty seconds to make sure that the code is released properly and will be used properly.</p>

<p>Go to https://opensource.org/licenses/alphabetical and take a look at the licenses that are available. There are many out there, as well as the venerable GPL and BSD licenses. If that list is daunting, check out http://choosealicense.com/ from Github.</p>

<p>Don’t ignore sixty years of history.</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[The History of Open Source]]></title>
            <link href="/2017/01/04/the-history-of-open-source/"/>
            <updated>2017-01-04T00:00:00+00:00</updated>
            <id>/2017/01/04/the-history-of-open-source/</id>
            <content type="html"><![CDATA[<h5 id="this-was-originally-published-on-medium.com">This was originally published on <a href="https://medium.com/@dragonmantank/a-history-of-open-source-733dd2836e13">Medium.com</a></h5>

<p>The world of computers is an odd place. In the span of my own lifetime, I’ve gone from not owning a computer because it was too expensive to <a href="https://github.com/nickplee/BochsWatchOS">owning a watch that has more computing power than the first computer I ever owned</a>. The amount of computing power in my house is mind boggling when I think about it compared to twenty years ago.</p>

<p>Software, too, has evolved. I started off with DOS, then switched to Windows 3.1. I never personally owned a modern Mac until a few years ago, but used them throughout school. There was always the PC vs Mac rivalry but I didn’t care for the most part. I used a PC because it played games. That was up until I found Linux.</p>

<p>Somewhere around 2000, I was at a book store and came across a boxed set for Linux Mandrake. I think it was something like fifty dollars and I had enough cash for it. I installed it on a second machine I had and was amazed.</p>

<p>I quickly ran into problems running it and had to search for help online. I started to learn about sharing source code, how to patch and recompile programs, and this whole world of sharing code. The GPL made all of this possible.</p>

<p>This GPL thing intrigued me though. Here was this document that told me I was allowed to modify and share the source code to software as long as I made my changes public. That all made sense. If something didn’t work correctly, I should be able to fix it and let other people know of the fix. I could not do that with Windows, or Microsoft Office, or Photoshop on the Macs at school.</p>

<p>Why did I need this document, this proof that I was allowed to do this and not get in trouble?</p>

<p>That’s the world we live in.</p>

<p>How did we get here?</p>

<blockquote>
  <p>“All information should be free” — Steven Levy, “Hackers: Heroes of the Computer Revolution”, on the Hacker Ethic</p>
</blockquote>

<p>In 1956, the Lincoln Laboratory designed the TX-0, one of the earliest transistorized computers. In 1958 it was loaned to MIT while Lincoln worked on the TX-2.</p>

<p>The TX-0 amazed the early computer hackers at MIT. It didn’t use cards, and it wasn’t cloistered away like the hulking behemoth of a machine from IBM that most people at MIT programmed against. You typed your program onto a ribbon of thin paper, fed it into the console, and your program ran.</p>

<p>Most importantly, the TX-0 was not nearly as guarded as the holy IBM 704. Most of the hackers were free to do what they wanted with the machine. There was one problem, and it was somewhat of a large one — the TX-0 had no software.</p>

<p>So the hackers at MIT created what they needed.</p>

<p>Most of the software was kept in drawers and when you needed something, you reached in and grabbed it. The best version of a tool would always be available, and anyone could improve it at any time. Everyone was working to make the computer and the software better for everyone else.</p>

<p><strong>“All information should be free”</strong> was a core tenet of the hacker culture at MIT. No one needed permission to modify the software as everyone was interested in making the software, and thereby the TX-0, better.</p>

<p>As the machines changed and the software changed, this ethos did not. Software would be shared and changed to work on many different types of hardware, and improvements were added over time. Needed the latest copy? Just ask for it. Need to fix it? Just fix it.</p>

<blockquote>
  <p>“To me, the most critical thing in the hobby market right now is the lack of good software courses, books, and software itself. […] Almost a year ago, Paul Allen and myself, expecting the hobby market to expand, hired Monte Davidoff and developed Altair BASIC. […] The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however. 1) Most of these “users” never bought BASIC […]” — Bill Gates, “An Open Letter to Hobbyists”</p>
</blockquote>

<p>Fast forward to 1976. Computers have left the halls of universities that had the physical space needed in the 50’s and 60’s to house them and are entering people’s homes. They aren’t necessarily like the computers we have today, but all computers need software.</p>

<p>The ideals of the hacker culture at MIT did not change as they spread westward and as these computers invaded the lives of hobbyists. What had changed was the business around computers, and like anything when it comes to humans, there is always money to be made.</p>

<p><strong>“All information should be free”</strong> reared its head when the tape containing Altair BASIC disappeared from a seminar put on by MITS at Rickey’s Hyatt House in Palo Alto, California. Why? Ed Roberts, the “father of the personal computer” and the founder of MITS (Micro Instrumentation and Telemetry Systems), had decided to not give the Altair BASIC software to customers for free and instead charged $200 for the ability to write software.</p>

<p>For better or for worse, copies of Altair BASIC started appearing and being shared.</p>

<p>The landscape of computers and software development was changing. You no longer had one or two giant machines sitting in a university that had paid staff who could write software for them. With the TX-0 at MIT, it did not cost them anything extra to make and distribute software because there was no downside — there was not any money being exchanged. Just increases in workflow (and better gaming).</p>

<p>By the 1970s, the need for software was seeing an ideological shift. Up until this point, the creation of software had been paid for indirectly by the universities and companies that needed it. Since most software was built by university developers, it was infused with the academic idea of sharing knowledge. Now software developers were seeing the need to develop generic software that many people would need to use.</p>

<p>That costs money, because developers have themselves and their families to support.</p>

<blockquote>
  <p>“Those who do not understand UNIX are condemned to reinvent it, poorly” — Henry Spencer</p>
</blockquote>

<p>The 1970s also saw the development of the Unix operating system developed at AT&amp;T by Ken Thompson, Dennis Ritchie, and others. Much like the original tools built by the hackers at MIT on the TX-0, Unix grew as it was licensed to other companies and universities.</p>

<p>Unix was alluring because it was portable and handled multiple users and multi-tasking. Standards help people develop software, and Unix became one of those standards. Before this there was Multics for the GE-645 mainframe, but it was not without its faults.</p>

<p>AT&amp;T, however, was not allowed to get into the computer business due to an antitrust case that was settled in 1958. Unix was not able to be sold as a product, and Bell Labs (owned by AT&amp;T) was required to license its non-telephone technology to anyone who asked.</p>

<p>Ken Thompson did just that.</p>

<p>Unix was handed out with a licenses that dictated the terms of usage, as the software was distributed in source form. The only people who had requested Unix were ones that could afford the servers, namely universities and corporations. The same entities that were used to just sharing software.</p>

<p>The open nature of Unix allowed researchers to extend Unix as they saw fit, much as they were used to doing with most software. As fixes were developed or things were improved, they were folded into mainstream Unix.</p>

<p>The University of California at Berkeley maintained one of the most sought-after versions of the Unix code base, and started distributing its own variant in 1978, known as 1BSD, as an add-on to Version 6 Unix.</p>

<p>There was a hitch though. AT&amp;T owned the copyright to the original Unix software. As time went on AT&amp;T used software from projects outside of themselves, including the Computer Sciences Research Group from Berkeley.</p>

<p>Eventually AT&amp;T was allowed to sell Unix, but its commercially available version was missing pieces that were showing up in the Berkeley variant, and the BSD tapes contained AT&amp;T code, which meant users of BSD required a usage license from AT&amp;T.</p>

<p>The BSD extensions were what we would eventually call “Open Source,” in a permissive sense. BSD was rewritten to remove the AT&amp;T source code, and while it maintained many of the core concepts of and compatibility with the AT&amp;T Unix, it was legally different.</p>

<p>Much like with Bill Gates and Micro-Soft’s (eventually Microsoft’s) Altair BASIC, we start to see the business side of software conflict with the academic side, or rather with the hacker idea of software.</p>

<p>We also see one of the first true Open Source licenses come from this, one which explicitly grants the end user rights covering what they can and can’t do with the software. Unix had its own license, which up until commercialization (and a growing market) had been fairly liberal, but BSD wanted to make sure that Unix would be available to whoever needed it.</p>

<blockquote>
  <p>“Whether gods exist or not, there is no way to get absolute certainty about ethics. Without absolute certainty, what do we do? We do the best we can.” — Richard Stallman</p>
</blockquote>

<p>In 1980, copyright law was extended to include computer programs. Before that, most software had been freely shared or sold on a good-faith basis. You either released your software into the public domain for everyone to use, or you sold it with the expectation that the buyer wouldn’t turn around and give it away for free.</p>

<p>Richard Stallman was, and probably is, one of the last true Hackers from the MIT era. In a sort of hipster-y kind of way he yearned for the time when software could be free, not shackled by laws or corporations. In a sense, software was meant to be shared and wanted to be shared. <strong>“All information should be free.”</strong></p>

<p>Stallman announced the GNU project in 1983, which was an attempt to create a Unix-compatible operating system that was not proprietary. NDAs and restricted licenses were antithetical to the ideals of free software that he loved.</p>

<p>The Free Software Foundation was founded in 1985, and along with it came the idea of “copyleft.” Software was meant to be free, and the GNU Manifesto shared Stallman’s ideas on the GNU Project and software in general. Whether you agreed with it or not, the GNU Manifesto was a fundamental part of what we now consider Open Source.</p>

<p>Stallman then consolidated his three licenses (those covering GNU Emacs, the GNU Debugger, and the GNU C Compiler) into a single license to better serve software distribution: the GPL v1, in 1989.</p>

<p>The release of the GPL, the release of a non-AT&amp;T BSD Unix, and the flood of commercial software in the ’80s and ’90s led us to where we are today, and gave us the three major ideals that exist:</p>

<ul>
<li>Software should always be free — Copyleft</li>
<li>Software should be easy to use and make developers’ lives easier — Permissive</li>
<li>Software should be handled as the creator sees fit — Commercial</li>
</ul>

<blockquote>
  <p>“younger devs today are about POSS — Post open source software. fuck the license and governance, just commit to github.” — James Governor</p>
</blockquote>

<p><a href="http://redmonk.com/dberkholz/2013/04/02/quantifying-the-shift-toward-permissive-licensing/">Since 2000, there has been a shift toward permissive licensing</a>. One could argue that the GPL is dying. One could argue that developers are more interested in helping themselves than in the actual idea of free software.</p>

<p>There is no denying that there are two camps when it comes to Open Source software, with the crux of the problem being exactly what software is supposed to be, or do for us.</p>

<p>The GPL says it should be free. In a way, Software is a living, breathing thing that wants to have the freedom to become the best possible piece of Software. It cannot do that when it can be locked up, chained, and held back from the passions that developers have for making software better. You, the end user, are better because Software can be changed to make everyone’s lives better, and you are better because you can change the Software.</p>

<p>The other camp is more pragmatic in a way. Permissive licensing wants software to be free because that helps Developers. You, the Developer, release software to make people’s lives better. You, the Developer, are more interested in knowing that people can use your software or code in a way that they see fit. The end user is better because the Developer had the freedom to change the software to make everyone’s lives better.</p>

<p>Is one better than the other?</p>

<p>Or should we just throw it all to the wind and ignore sixty years of computer history and forget about licenses?</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[Developing on Windows, 2016 Edition]]></title>
            <link href="/2016/11/13/developing-on-windows-2016/"/>
            <updated>2016-11-13T00:00:00+00:00</updated>
            <id>/2016/11/13/developing-on-windows-2016/</id>
            <content type="html"><![CDATA[<p>Recently, with the new Macbook refresh for 2016, many developers have taken a good hard look at whether or not they want to stick with macOS and the hardware. Traditionally Macbook Pros have been an excellent kit to use, and even I used one for travel up until earlier this year. They had powerful CPUs, could be loaded with a good amount of RAM, and had unparalleled battery life. The fact that macOS was built on a Unix subsystem also helped, making it easier for developer tools to be built and worked with thanks to the powerful command line interface.</p>

<p>The new hardware refresh was less than stellar. All jokes aside about the Touch Bar, it was not the hardware refresh many of us were looking for. While it does mean that 2015 models might be cheaper, if you are looking for a new laptop, is it time to possibly switch to another OS?</p>

<p>Linux would be the closest analogue in terms of how things work, but not all hardware works well with it. You will also lose a lot of day-to-day software, but the alternatives might work for you. If you are looking at a new OS, I'd heavily look at Linux on good portable hardware like a Thinkpad X or T series laptop. Even a used Thinkpad (except the 2014 models) will serve you well for a long time if you go the Linux route.</p>

<p>Up until about August, I ran Linux day-to-day. My job change meant that I had to run software that just did not work well under Linux, so I switched back to Windows. Back in <a href="/2015/07/01/developing-on-windows/">2015 I wrote about my Windows setup</a>, and I think it's time for an update now that I'm on Windows full time. A lot of things have changed in the past year, and while working on Windows before was pretty good, it's even better now.</p>

<h2 id="windows-10-pro">Windows 10 Pro</h2>

<p>Yes, it might be watching everything you do, but I've upgraded all of my computers to Windows 10 Pro. Part of this was necessity, as Docker only works on Windows 10 Pro or higher, but Windows 10 itself also opens up the ability to run bash. If you are coming from Windows 7, there isn't much difference. Everything is just about the same, with a little bit of the Windows 8.1 window dressing still coming through sometimes.</p>

<p>Windows 10 Pro also affords me the ability to remote desktop into my machine. Yes, yes, I know that I can do that for free with macOS and Linux, but that's not the <em>only</em> reason to use Pro. Remote Desktop allows me to access my full desktop from my laptop or my phone. There's been a bunch of times where I'm away, get an urgent e-mail, and need to check something on our corporate VPN. I just remote desktop into my machine at home and I'm all set. This is much easier than setting up OpenVPN on my iPhone.</p>

<p>The main reason I run Windows 10 is Docker, which I'll outline below. The short of it is that Docker for Windows requires Hyper-V, and Hyper-V is only available on Windows 10 Pro or higher.</p>

<p>If you are running a PC and Windows, you should have upgraded. Nearly all the software that I had problems with works fine with Windows 10 now. Any issues I have are purely just because of how Windows handles things after I've gotten used to Linux.</p>

<h2 id="the-command-line---bash-and-powershell-wrapped-in-conemu">The Command Line - bash and Powershell wrapped in ConEmu</h2>

<p>Part of this hasn't changed. I still use Powershell quite a bit. I even give my Docker workshop at conferences from a Powershell instance. With the Windows 10 Anniversary Update, Powershell now works more like a traditional terminal so you can resize it! That sounds like a little thing, but being stuck to a certain column width in older versions was a pain. Copy and paste has also been much improved.</p>

<p>I still install <a href="http://git-scm.com/">git</a> and <a href="https://github.com/dahlbyk/posh-git">posh-git</a> to get the terminal experience I had using zsh and oh-my-zsh on Linux. Since Powershell has aliases for most of the common GNU commands, moving around is pretty easy and the switch to using Powershell shouldn't take long. Some things like <code>grep</code> don't work, though, so you will have to find alternatives ... or you could just use real <code>grep</code> using bash.</p>
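<p>For the common "grep a project" case, Powershell's built-in <code>Select-String</code> cmdlet covers most of what I need. A quick sketch (the pattern and file filter below are just examples):</p>

```powershell
# Recursively search PHP files for a pattern, grep-style
Get-ChildItem -Recurse -Filter *.php | Select-String -Pattern "TODO"

# Roughly equivalent to: grep -rn "TODO" --include=*.php .
```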

<p>I also do all of my Docker stuff from within Powershell. The reasons for this are twofold - one is that it works fine in Powershell out of the box, and the second is that setting up Docker to work in bash is a bit of a pain.</p>

<p><a href="http://windows.php.net/">PHP</a> and <a href="https://getcomposer.org/download/">Composer</a>, which are daily uses for me, are also installed with their Windows variants. I do also run specific versions under Docker, but having them natively inside Powershell just saves some time. PHP just gets extracted to a directory (<code>C:\php</code> for me), and you just point the Composer installer to that. After that, PHP is all set to go.</p>

<p>The <a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide">Windows Subsystem for Linux (or bash)</a> is a must for me. This provides an environment for running Ubuntu 14.04 in a CLI environment directly inside of Windows. This is a full version of Linux, with a few very minor limitations, for running command line tools and development tools. I'm pretty familiar with Ubuntu already, so I just install things as I would in Ubuntu. I have copies of PHP, git, etc, all installed.</p>

<p>What I don't do is set up an entire development environment inside Ubuntu/bash, I leave that for Docker. Getting services to run like Apache can be a bit of a pain because of the networking stuff that happens between bash and the host Windows system. You <em>can</em> do it, I just chose not to.</p>

<p>I'll switch back and forth between bash and Powershell as needed.</p>

<p>I've also switched to using <a href="http://conemu.github.io/">ConEmu</a>, which is a wrapper for various Windows-based terminals. It provides an extra layer that allows things like tabs, better configuration, etc. I have it defaulted to a bash shell, but have added a keyboard shortcut to launch Powershell terminals as well. This keeps desktop clutter down while giving me some of the power that Linux/macOS-based terminals had.</p>

<h3 id="editing-files-in-bash">Editing files in bash</h3>

<p>One thing I don't do in bash is store my files inside of the home directory. When you install it, it sets up a directory inside <code>C:\Users\Username\AppData\Local\Lxss\rootfs</code> that contains the installation, and <code>C:\Users\Username\AppData\Local\Lxss\home\username</code> that contains your home directory. I've had issues with files edited directly through those paths not showing up in the bash instance. For example, I no longer open bash, <code>git clone</code> a project into <code>~/Projects</code>, and then open up PhpStorm to edit the files through those paths. When I did, I'd perform the edits inside PhpStorm, save the file, and sometimes the edits showed up, sometimes they didn't.</p>

<p>Instead, I always move to <code>/mnt/c/Users/Username/</code> and do everything in there. bash automatically mounts your drives under <code>/mnt</code>, so you can get to the "Windows" file system pretty easily. I haven't had any issues since doing that.</p>

<h2 id="docker-for-windows">Docker for Windows</h2>

<p>Microsoft has done a lot of work to help <a href="https://www.docker.com/products/docker#/windows">Docker</a> run on Windows. While it is not as perfect as the native Linux version, the Hyper-V version is leaps and bounds better than the old Docker Toolbox version. Hyper-V's I/O and networking layer are much faster, and other than a few little quibbles with Powershell it is just as nice to work in as on Linux. In fact, I've been running my Docker workshop from Windows 10 for the last few times with as much success as in Linux.</p>

<p>It does require Hyper-V to be installed, so it's still got some of the same issues as running Docker Toolbox when it comes to things like port forwarding. You can also run Windows containers, though nothing I do day-to-day requires them, so my work is all inside Linux containers.</p>

<p>I would suggest altering the default settings for Docker though. You will need to enable "Shared Drives," as host mounting is disabled by default. I would suggest going under "Network" and setting a fixed DNS server. This helps resolve some issues when the Docker VM decides to just stop resolving internet traffic. If you can spare it, go under "Advanced" and bump up the RAM as well. I have 20 gigabytes of RAM on my desktop so I bump it up to 6 gigs, but my laptop works fine at the default 2 gigabytes.</p>

<p>All of my Docker work is done through Powershell, as the Docker client sets up Powershell by default. You could get this working under Bash as well by installing the Linux Docker Client (not the engine), and pointing it to the Hyper-V instance, but I find that's much more of a pain than just opening a Powershell window.</p>

<p>I run all of my services through Docker, so Apache, MySQL, etc, are all inside containers. I don't run any servers from the Windows Subsystem for Linux.</p>

<h2 id="phpstorm-and-sublime-text">PhpStorm and Sublime Text</h2>

<p>Nothing here has changed since 2015. <a href="https://www.jetbrains.com/phpstorm/">PhpStorm</a> and <a href="http://www.sublimetext.com/3">Sublime Text 3</a> are my go-to editors. PhpStorm is still the best IDE I think I've ever used, and Sublime Text is an awesome text editor with very good large file support.</p>

<h2 id="what-i%27m-not-using-anymore">What I'm Not Using Anymore</h2>

<p>A few things have changed. I've switched to using <a href="https://www.irccloud.com/">IRCCloud</a> instead of running my own IRC bouncer. It provides logging and excellent mobile apps for iOS and Android. It is browser-based and can eat memory if the tab is left open for days, but it saves me running a $5 server on Digital Ocean that I have to maintain.</p>

<p>PuTTY, while awesome, is completely replaced for me with Powershell and bash. Likewise, cygwin is dead to me now that I have proper Linux tools inside bash.</p>

<p>I've also pretty much dropped Vagrant. At my day job we have to run software that isn't compatible with Virtualbox, and Docker on Windows works just fine now. I don't even have Vagrant installed on any of my machines anymore.</p>

<h2 id="it%27s-a-breeze">It's a Breeze</h2>

<p>Developing PHP on Windows is nearly as nice as developing on Linux or macOS. I'd go so far as to say that I don't have a good use for my Macbook Pro anymore, other than some audio stuff I do where I need a portable machine. I'm as comfortable working in Windows as I was when I was running Ubuntu or ArchLinux, even though I'd much prefer running a free/libre operating system. I've got to make money though, so I'll stick with Windows for the while.</p>

<h2 id="tl%3Bdr">tl;dr</h2>

<p>Here's what I use:</p>

<ul>
<li>Windows 10 Pro</li>
<li><a href="http://conemu.github.io/">ConEmu</a>

<ul>
<li>Powershell</li>
<li><a href="http://git-scm.com/">git</a></li>
<li><a href="https://github.com/dahlbyk/posh-git">posh-git</a></li>
<li><a href="http://windows.php.net/">PHP</a></li>
<li><a href="https://getcomposer.org/download/">Composer</a></li>
<li><a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide">Windows Subsystem for Linux/bash</a>

<ul>
<li>git</li>
<li>vim</li>
<li>PHP</li>
<li>Composer</li>
</ul></li>
</ul></li>
<li><a href="https://www.docker.com/products/docker#/windows">Docker</a></li>
<li><a href="https://www.jetbrains.com/phpstorm/">PhpStorm</a></li>
<li><a href="http://www.sublimetext.com/3">Sublime Text 3</a></li>
<li><a href="https://www.irccloud.com/">IRCCloud</a></li>
</ul>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[My PHP and Docker Workflow]]></title>
            <link href="/2016/07/27/my-php-docker-workflow/"/>
            <updated>2016-07-27T00:00:00+00:00</updated>
            <id>/2016/07/27/my-php-docker-workflow/</id>
            <content type="html"><![CDATA[<h2 id="my-docker-setup">My Docker Setup</h2>

<p>When it comes to Docker, I use <a href="https://docs.docker.com/compose/">Docker Compose</a>
to set up and link all of my containers together. It's rare that I have a
single container, though many of my <a href="https://sculpin.io/">Sculpin</a>-based
sites live quite comfortably inside of an nginx container, and even those
take advantage of volumes. For a basic three-tiered application, I start
off with this basic <code>docker-compose.dev.yml</code> file:</p>

<pre><code># docker-compose.dev.yml
version: '2'

volumes:
  mysqldata:
    driver: local

services:
  nginx:
    image: nginx
    volumes:
      - ./:/var/www:ro
      - ./app/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - phpserver

  phpserver:
    build:
      context: ./
      dockerfile: ./phpserver.dockerfile
    working_dir: /var/www/public
    volumes:
      - ./:/var/www/
    links:
      - mysqlserver

  mysqlserver:
    image: mysql
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_db
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - mysqldata:/var/lib/mysql

  composer:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./composer.dockerfile
    volumes:
      - ./:/app
</code></pre>

<p>I tend to use the stock <a href="https://hub.docker.com/_/nginx/">nginx image</a> supplied on the <a href="https://hub.docker.com/">Docker Hub</a>,
as well as the official <a href="https://hub.docker.com/_/mysql/">MySQL image</a>. Both of these tend to work out of the box
without much extra configuration other than mounting some config files, like I do above for nginx.</p>

<p>Most of my PHP projects tend to need extensions, so I use the following Dockerfile for PHP:</p>

<pre><code>FROM php:fpm

RUN docker-php-ext-install pdo pdo_mysql

COPY ./ /var/www
</code></pre>

<p>It uses the stock FPM tag supplied by the <a href="https://hub.docker.com/_/php/">PHP image</a>, and I generally use the full-sized
versions of the images. There are also images built on Alpine Linux which are much smaller, but I've had issues
trying to build some extensions on them. I also have a COPY command here because this is the same Dockerfile I use for
production; in development the copy is a wasted operation, since the volume mount takes its place.</p>

<p>The other thing I do is define a service for <a href="https://getcomposer.org/">Composer</a>, the package manager for PHP. The Dockerfile
for it mirrors the one for PHP, except it is built using the <a href="https://hub.docker.com/r/composer/composer/">composer/composer</a>
image and it doesn't copy any files into itself as it never goes into production.</p>

<pre><code>FROM composer/composer

RUN docker-php-ext-install pdo pdo_mysql
</code></pre>

<p>As is pretty standard, nginx links to PHP, and PHP links to MySQL.</p>

<p>With a <code>docker-compose -f docker-compose.dev.yml up -d</code> I can have my environment build itself and be all ready to go.</p>

<h3 id="why-the-composer-service%3F">Why the Composer Service?</h3>

<p>I'm a big fan of <a href="http://ctankersley.com/2015/12/23/dockerize-commands/">containerizing commands</a>, as it reduces the amount of
stuff I have installed on my host machine. As Composer is a part of my workflow, which I'll go over more in a minute, I build
a custom image specific to this project with all the needed extensions. Without doing this, I would have to run Composer either
from my host machine directly, which can cause issues with missing extensions, PHP version mismatches, etc., or run
Composer with the <code>--ignore-platform-reqs</code> flag, which can introduce dependency problems with extensions.</p>

<p>Building my own image makes it simple to script a custom, working Composer container per project.</p>

<p>The <code>entrypoint: /bin/true</code> line is there just to make the container that Docker Compose creates exit right away, as there is
not currently a way to have Compose build an image but not attempt to run it.</p>

<p>The other thing you can do is download the PHAR package of composer, and run it using the PHP image generated by the project.</p>

<h2 id="custom-functions">Custom Functions</h2>

<p>I hate typing, so I have a few shell functions that make working with my toolchain a bit easier. I use both a Mac and ArchLinux,
so I standardized on using the <a href="http://www.zsh.org/">zsh</a> shell. This makes it easier to move my shell scripts from one machine
to another. Since I tend to run the PHP and Composer commands regularly, I have two functions I define in zsh that look to see
if there is an image available for the project I'm in, otherwise they default to stock images:</p>

<pre><code># ~/.zshrc

# Run PHP through the project's phpserver image if one has been built,
# otherwise fall back to the stock php:cli image.
function docker-php() {
    appname=$(basename `pwd -P`)
    appname="${appname//-/}"   # Compose drops hyphens when naming images
    imagename='php:cli'
    if docker images | grep -q "${appname}_phpserver"; then
        imagename="${appname}_phpserver"
    fi
    docker run -ti --rm -v $(pwd):/app -w /app $imagename php $*
}

# Same idea for Composer, falling back to the composer/composer image.
function docker-composer() {
    appname=$(basename `pwd -P`)
    appname="${appname//-/}"
    imagename='composer/composer'
    if docker images | grep -q "${appname}_composer"; then
        imagename="${appname}_composer"
    fi
    docker run --rm -v ~/.composer:/root/.composer -v $(pwd):/app -v ~/.ssh:/root/.ssh $imagename $*
}
</code></pre>

<p>I can now run <code>docker-php</code> to invoke a PHP CLI command that uses a project's <code>phpserver</code> image, and <code>docker-composer</code> to do
the same with Composer. I could clean these up, and probably will in the future, but for now they get the job done.</p>

<h2 id="a-general-workflow">A General Workflow</h2>

<p>By using Docker Compose and the custom functions, I'm pretty well set. I copy all of these files into a new directory, run
my <code>docker-composer</code> command to start requiring libraries, and I'm all set. If I need to use a skeleton project I will just
create it in a sub-folder of my project and move everything up one level.</p>
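<p>That "move everything up one level" step has one gotcha: a plain <code>*</code> glob skips dotfiles like <code>.gitignore</code>. A minimal bash sketch, using placeholder files to stand in for whatever the skeleton actually generates:</p>

```shell
# Simulate a skeleton project created in a sub-folder (placeholder files)
mkdir -p tmp
touch tmp/composer.json tmp/.gitignore

# dotglob makes * match hidden files too, so dotfiles move as well
shopt -s dotglob
mv tmp/* .
rmdir tmp
```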

<p>For applications that are being built against one specific version of PHP, I end here, and I run my unit tests using the
<code>docker-php</code> function that I have defined. If I need to have multiple versions of PHP to test against, I'll make stub
services like I did with the <code>composer</code> service.</p>
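<p>Such a stub can be sketched as an extra entry under <code>services:</code> in the <code>docker-compose.dev.yml</code> above. The <code>php71server</code> name and Dockerfile here are illustrative, not part of the original setup:</p>

```yaml
  # Stub service: built so an image for this PHP version exists,
  # but /bin/true makes the container exit immediately on "up".
  php71server:
    entrypoint: /bin/true
    build:
      context: ./
      dockerfile: ./php71server.dockerfile
    volumes:
      - ./:/var/www/
```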

<p>Any custom commands above and beyond this get bash scripts in the project.</p>

<h2 id="deployment">Deployment</h2>

<p>Deployment is always done on a project-by-project basis. I tend to package up the application in one image for the most
part, and then rebuild the application using the new images. How I do that depends on the actual build process being
used, but it is a combination of using the above Dockerfiles for PHP and/or Docker Compose and stacking config files with <code>-f</code>.</p>

<p>I skirt the whole dependency issue with Composer by normally running it with <code>--ignore-platform-reqs</code> on the build server,
mostly so I don't clog the build server with more images than I need, and so that I don't have to install any more extensions
than needed on the build server.</p>

<p>Either way, the entire application is packaged in a single image for deployment.</p>
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[Dockerizing Commands]]></title>
            <link href="/2015/12/23/dockerize-commands/"/>
            <updated>2015-12-23T00:00:00+00:00</updated>
            <id>/2015/12/23/dockerize-commands/</id>
<content type="html"><![CDATA[<p>Back on December 10th, I launched my first book, <a href="https://leanpub.com/dockerfordevs">Docker for Developers</a>, on Leanpub.
One of the things that I kind of glossed over, mostly because it wasn't the focus of the book, was at the beginning of
the "Containerizing Your Application" chapter. It was this:</p>

<blockquote>
  <p>Modern PHP applications do not generally tote around their
  vendor/ directory and instead rely on Composer to do our dependency injection. Let’s pull down
  the dependencies for the project.</p>

<pre><code>$ docker run --rm -u $UID -v `pwd`:/app composer/composer install
</code></pre>
  
  <p>This first initial run will download the image as you probably do not have this composer/composer
  image installed. This container will mount our project code, parse the composer.lock, and install our
  dependencies just like if we ran composer locally. The only difference is we wrapped the command
  in a docker container which we know has PHP and all the correct libraries pre-installed to run
  composer.</p>
</blockquote>

<p>There's something very powerful in there that I'm not sure many people take away from the book. I spend most of my time
showing how Docker is used and how to get your application into it, and the book answers the question which many people
have at the outset - how do I get my application into Docker?</p>

<p>One thing many people overlook is that Docker is not just a container for servers or other long-running apps, but it is a container
for any command you want to run. When you get down to it, that is all Docker is doing, just running a single command (well,
if done the Docker way). Most people just focus on long running executables like servers.</p>

<p>Any sort of binary can generally be pushed into a container and since Docker can mount your host file system you can
start to containerize any binary executable. In the Composer command above I've gotten away from having a dedicated
Composer command, or even phar, on my development machines and just use the Dockerized version.</p>

<p>Why?</p>

<p>Less maintenance and thinking.</p>

<p>Docker has become a standard part of my everyday workflow now even if the project I'm working on isn't running inside 
of a Docker container. I no longer have to install anything more than Docker to get my development tools I need. Let's 
take Composer for example.</p>

<h2 id="putting-composer-in-a-container">Putting Composer in a Container</h2>

<p>Taking a look at Composer, it is just a phar file that can be downloaded from the internet. It requires PHP with a few
extensions installed.</p>

<p>Let's make a basic Dockerfile and see how that works:</p>

<pre><code>FROM php:7

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ENTRYPOINT ["composer"]
CMD ["--version"]
</code></pre>

<p>We can then build it with the following:</p>

<pre><code>docker build -t composer .
</code></pre>

<p>I should then be able to run the following and get the Composer version:</p>

<pre><code>docker run -ti --rm composer
</code></pre>

<p>Great! There's a problem though. Go ahead and try to install a few things, and eventually you'll get an error stating that
the zip extension isn't installed. We need to install and enable it through the <code>docker-php-ext-*</code> commands available in
the base image. It has some dependencies so we will install those through <code>apt</code> as well.</p>

<pre><code>FROM php:7

RUN apt-get update &amp;&amp; \
  DEBIAN_FRONTEND=noninteractive apt-get install -y \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libmcrypt-dev \
    libpng12-dev \
    libbz2-dev \
    php-pear \
    curl \
    git \
    subversion \
  &amp;&amp; rm -r /var/lib/apt/lists/*

RUN docker-php-ext-install zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ENTRYPOINT ["composer"]
CMD ["--version"]
</code></pre>

<p>Now rebuild the image and try again. It will probably work. You won't have a vendor directory, but the command won't
fail anymore. We need to mount our directory inside of the container, which brings us back to the original command:</p>

<pre><code>docker run --rm -u $UID -v `pwd`:/app composer/composer install
</code></pre>

<p>That is a lot of stuff to type out, especially compared to just <code>composer</code>. Through the beauty of most CLI-based operating
systems you can create aliases, though. Aliases allow you to type short commands that are expanded out into much longer
commands. In my <code>~/.zshrc</code> file (though you might have a <code>~/.bashrc</code> or <code>~/.profile</code> or something similar) we can create
a new alias:</p>

<pre><code>alias composer='docker run --rm -u $UID -v $PWD:/app composer'
</code></pre>
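<p>One quoting detail worth calling out: with double quotes, <code>$PWD</code> is expanded once, when the alias is <em>defined</em> as <code>~/.zshrc</code> is sourced, so the mount would be stuck pointing at that directory forever. Single quotes defer the expansion until you actually run the alias. A quick bash sketch of the difference:</p>

```shell
# In a script, bash needs this for aliases to expand
# (interactive shells have it on by default):
shopt -s expand_aliases

cd /tmp
alias where_now='echo $PWD'   # single quotes: $PWD expands each time it runs
baked="echo $PWD"             # double quotes: /tmp gets baked in right now

cd /
where_now      # prints "/" -- the directory at run time
eval "$baked"  # prints "/tmp" -- the directory at definition time
```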

<p>Now I can simply type <code>composer</code> anywhere from the command line and my <code>composer</code> image will kick up.</p>

<p>A better version can be found in the <a href="https://github.com/RobLoach/docker-composer/blob/php7/base/Dockerfile">Dockerfile for the PHP base image of composer/composer</a>
on Github, which I based the above on. In fact, I don't build my own Composer image, I use the existing one at 
<a href="https://hub.docker.com/r/composer/composer/">https://hub.docker.com/r/composer/composer/</a> since I don't have to maintain it.</p>

<h2 id="it-isn%27t-just-php-stuff">It isn't just PHP stuff</h2>

<p>Earlier today I sent out the following tweet after getting frustrated with running Grunt inside of Virtualbox.</p>

<blockquote class="twitter-tweet" lang="en"><p lang="en" dir="ltr">Next project - containerizing grunt and bower, because it takes too damn long for file changes to propagate into virtualbox</p>&mdash; Rogue PHP Engine (@dragonmantank) <a href="https://twitter.com/dragonmantank/status/679401741683720192">December 22, 2015</a></blockquote>

<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>It is a
pain because some of Grunt's functionality relies on the filesystem notifying it that a file has changed, and when Grunt
runs inside of a virtual machine and is watching a mounted folder (be it NFS or anything else other than rsync) it can
take up to 30 seconds for the notify signal to bubble up. That makes for some slow development.</p>

<p>I hate polluting my work machine with development tools. I had a few people say they would love having Grunt and Bower
inside of a Docker container, so I did just that.</p>

<p>I created a new container called <a href="https://hub.docker.com/r/dragonmantank/nodejs-grunt-bower/">dragonmantank/nodejs-grunt-bower</a>
and pushed it up as a public repository on the Docker Hub.</p>

<p>Since these images are pre-built, I don't have to worry about any dependencies they might need, and setting up a new
machine for these tools is now down to installing Docker (which is going to happen for me anyway) and setting up the
following aliases:</p>

<pre><code>alias composer='docker run --rm -u "$UID" -v "$PWD":/app composer/composer'
alias node='docker run -ti --rm -u "$UID" -v "$PWD":/data dragonmantank/nodejs-grunt-bower node'
alias grunt='docker run -ti --rm -u "$UID" -v "$PWD":/data dragonmantank/nodejs-grunt-bower grunt'
alias npm='docker run -ti --rm -u "$UID" -v "$PWD":/data dragonmantank/nodejs-grunt-bower npm'
alias bower='docker run -ti --rm -u "$UID" -v "$PWD":/data dragonmantank/nodejs-grunt-bower bower'
</code></pre>

<p>Note the single quotes: with double quotes, a <code>`pwd`</code> inside the alias expands when the alias is <em>defined</em>, freezing it to whatever directory your shell started in. Single quotes defer the expansion of <code>$PWD</code> until the alias is actually run, which is what you want.</p>

<p>The first time I run one of the commands, the image is downloaded automatically, so I don't have to do anything other
than run the command I want.</p>

<h2 id="start-thinking-about-dockerizing-commands">Start Thinking about Dockerizing Commands</h2>

<p>Don't think that Docker is only about running servers or daemons. Any binary can generally be put inside of a container,
and you might as well make your life easier by making your tools easier to install and maintain.</p>
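<p>As an illustration, the same alias trick works for almost any command-line tool. The image name here is hypothetical; substitute whatever image ships the binary you care about:</p>

<pre><code># Hypothetical example: run a "mytool" binary from an image instead of
# installing it on the host. Files in the current directory are available
# to the tool through the volume mount.
alias mytool='docker run -ti --rm -u "$UID" -v "$PWD":/data example/mytool mytool'
</code></pre>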
]]></content>
        </entry>
            <entry>
            <title type="html"><![CDATA[ZendCon 2015]]></title>
            <link href="/2015/10/23/zendcon-2015/"/>
            <updated>2015-10-23T00:00:00+00:00</updated>
            <id>/2015/10/23/zendcon-2015/</id>
            <content type="html"><![CDATA[<p>Another year, another ZendCon. I think I've been to every one since 2008, except for one where they moved it to San Jose for a year. Either way, it has become a staple conference that I look forward to each year, and this year was no exception.</p>

<p>ZendCon 2015 was held not in its normal home of Santa Clara, CA, but this time in Las Vegas. I think the years of attendees complaining about the lack of anything to do around the venue in Santa Clara, as well as the venue itself, helped push the idea that the conference should move. I didn't hate the old venue, but there was no good space to hang out near the conference rooms, and there was a huge lack of things to do unless you had access to a vehicle.</p>

<p>That said, it was never a big enough deal for me to not want to attend. I've been speaking at ZendCon since 2012 as well, which is awesome. They were the first conference to take a chance on me, and I'm eternally grateful.</p>

<h2 id="the-new-digs">The New Digs</h2>

<p>As I said, this year the conference was held in Las Vegas at the Hard Rock Hotel and Casino. While the hotel isn't directly on the Strip, it was close enough to get to the many attractions with either a taxi ride or a walk, depending on how much you love walking. The hotel itself had plenty of restaurants to choose from with excellent food, and there was obviously a casino there. The final night I spent an hour or two playing Blackjack with my friends <a href="https://twitter.com/jeremeamia">Jeremy Lindblom</a> and <a href="https://twitter.com/joepferguson">Joe Ferguson</a> and we had a great time.</p>

<p>As ombudsman of <a href="https://twitter.com/wurstcon">Wurstcon</a> I sanctioned a <a href="https://twitter.com/search?q=%23koshercon&amp;src=typd">#koshercon</a> event, which was a resounding success. Except for the ride there, where our taxi (an SUV with a low tire and brakes that needed changing) had to speed to catch the taxi of <a href="https://twitter.com/coderabbi">@coderabbi</a>, which was doing nearly 70mph. On the way back, half of us somehow clown-car'd our way back in a Toyota Highlander.</p>

<p>Toyota, you should feel bad for saying your car seats six people.</p>

<p>My only major complaint, and I'd probably have this complaint at any casino, was that the smoke was horrible. The only social places where people could sit were near the bars, and at this point in society, where nearly every public place is smoke-free, sitting in a smoky bar really grates on the eyes after a few hours.</p>

<h2 id="the-talks">The Talks</h2>

<p>I gave two talks - <a href="http://joind.in/talk/view/15534">Into the ZF2 Service Manager</a> and <a href="http://joind.in/talk/view/15586">Single Page Applications with Drupal 7</a>. The first talk went well, even if it was nearly a direct mirror of <a href="https://twitter.com/geeh">Gary Hockin's</a> talk; we both somehow came up with the same talk. It went over well, which I was happy about.</p>

<p>The second talk went well, but the audience just wasn't there. I mean that literally: I had two people, one from a Drupal shop and another who was interested in Single Page Apps. This was a problem not only for me, but for other Drupal speakers in general. Getting Drupal devs to ZendCon will be an uphill battle, as it's quite expensive compared to a DrupalCon ticket or any sort of Drupal/Bar Camp that is available. Hopefully next year ZendCon will be on more Drupal devs' radar.</p>

<p>The talks I attended were excellent as always. <a href="https://twitter.com/adamculp">Adam Culp</a> and his team at Zend picked a wonderful group of speakers that covered a vast range of topics. Each hour was filled with interesting talks, and it was hard to pick just one to go to.</p>

<p>I can't mention a conference without talking about the hallway track, and there was a great hallway track. Attendees were open and wanting to talk to each other, and I had many great conversations with new and old friends.</p>

<h2 id="in-closing...">In Closing...</h2>

<p>I had a great time, and I look forward to next year. While I enjoyed the new venue, I know that there will be many people who will think twice about attending a conference in a casino like this again, especially one where there were no social areas free of smoke.</p>

<p>If you've never attended, I highly suggest you look at ZendCon next year, and I look forward to attending again.</p>
]]></content>
        </entry>
    </feed>