In late 2015/early 2016, I took over maintenance of the mtdowling/cron-expression library. This was a library that we used quite heavily at my then day job, as it was part of our daily processing and scheduling for customers around the world. It let us schedule cron jobs relative to them, instead of us, without much work. When Michael reached out on Twitter for someone to help maintain it, I jumped at the chance.

For those that do not know what the library does, cron-expression validates a cron expression (something like 0 0 * * *), checks whether it matches the current time and needs to run, and can determine future run dates. If you need a simple way to schedule things, the cron syntax itself is very useful and well understood. cron-expression does not run your code, though; it is mostly a validation library.
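
For a rough sense of the API (this matches the 1.x/2.x releases; check the README if method names have moved since):

<?php
require 'vendor/autoload.php';

use Cron\CronExpression;

// "0 0 * * *" means "midnight, every day"
$cron = CronExpression::factory('0 0 * * *');

var_export($cron->isDue());          // does the expression match right now?
echo PHP_EOL;
echo $cron->getNextRunDate()->format('Y-m-d H:i:s'), PHP_EOL;
echo $cron->getPreviousRunDate()->format('Y-m-d H:i:s'), PHP_EOL;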

Much like Sculpin, it's a pretty stable project, so there wasn't a ton of movement on it development-wise. Some bug fixes here, a few enhancements there, but nothing major. At the beginning of 2017 I pushed out a 1.2.0 release. I had decided that I would only support PHP 7.0 going forward. By this time I had learned that Laravel was using this library under the hood, so I wanted to get one final release done for the older PHP 5.x series. v2.x and later would all require PHP 7.x.

Then I started digging into a bug, and that bug turned into a few bugs, all around validation. As it turns out, the regex that the library used was really loose and let a lot of stuff through. This did not seem to affect valid expressions, but it allowed a lot of junk through. As time went on, more and more reports about this came in. The underlying logic had to change, and I made that my main focus.

Then I got bug report #153, "Wrong nextRunDate for * rules". Long story short, step ranges in the library were broken. Someone had discovered a bug in Laravel's cron system that caused an expression to validate against the incorrect set of months. Even I misunderstood how step ranges worked, so I ended up diving into the source code for cronie, one of the main cron implementations shipped with Linux distributions.

cron-expression had gotten its implementation completely wrong. I re-implemented a bunch of our validation logic to work the same basic way cronie does, which ended up fixing not only the stepping issue but also our data validation woes. The new code was a bit more compact and more unified in how the library performs validation. Overall this was good.
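
To make the cronie semantics concrete, here is a tiny standalone sketch - explicitly not the library's real code - of how a single field with a step expands. The key rule is that the step walks the range starting from the range's own low end:

<?php
// Expand one cron field ("*/2", "1-5/2", "10") into its matching values.
function expandCronField(string $field, int $min, int $max): array
{
    $parts = explode('/', $field, 2);
    $range = $parts[0];
    $step  = isset($parts[1]) ? (int) $parts[1] : 1;

    if ($range === '*') {
        $low  = $min;
        $high = $max;
    } elseif (strpos($range, '-') !== false) {
        list($low, $high) = array_map('intval', explode('-', $range, 2));
    } else {
        $low = (int) $range;
        // A bare value with a step ("5/15") runs from that value to the max
        $high = isset($parts[1]) ? $max : $low;
    }

    return range($low, $high, $step);
}

// "*/2" in the month field matches 1, 3, 5, 7, 9, 11 - every other month
// counted from January, not "the even-numbered months"
print_r(expandCronField('*/2', 1, 12));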

There was a big problem though: this was a huge backward compatibility break. When a bug has survived long enough and people rely on its behavior, it is no longer a bug - it's a feature. Our stepping fix has the potential to break a huge number of systems, even if the old behavior is bad. People rely on it.

Sufficient time has passed for a v2.0.0 release, and that will be happening today. All the fixes will be available on Packagist as soon as it updates.

The project will also be moving to a new repository: dragonmantank/cron-expression. The reasons for this are twofold. One, I am not and cannot be an admin of the original repo, as it lives under a personal account rather than an organization, so I cannot wire in new build or checking systems at all. Two, this is the perfect time to make the break: v2.0.0 is incompatible with v1.x because of the stepping fix, and a new repo lets frameworks and other installs that rely on the v1.x branch move at their leisure without breaking them.

The old repo will no longer be maintained, but I will still watch its issues; existing issues will still be evaluated, just in the new repo. The old package will remain on Packagist for those that need it. All new work will be done in the new repo against the 2.x branch.

If anyone has any questions, feel free to hit me up on Twitter at @dragonmantank.

A few days ago at our family dinner I talked about how Alexa Bliss was setting off Amazon Echos during her matches. This is a slightly funnier, and less expensive, version of the story where a TV report prompted Amazon Echos to order dollhouses. I showed my wife a video of the commentators saying her name over and over while an Echo responded.

My youngest son said it would be cool to have one, and asked if we could get one. I said no. My wife and I are on the same page about this: the idea of a device, which I have no control over, listening to everything said in our house is not something we want. It's not just the Amazon Echo, either - I don't want a Google Home in the house any more than an Echo.

That led to a discussion about why having a listening device in the home is bad. We expect a certain amount of privacy in our own home, regardless of the fact that we are not doing anything against the law. I just do not want my private conversations overheard by a device that sends all of that back to a server, where it sits forever. Police have already tried to get Echo recordings for a murder case, though if Amazon is to be believed, nothing should have been recorded unless someone said "Alexa, help me!" Even if something had been recorded, Amazon states that such voice recordings are encrypted.

Knowing how well software is built and how often "encrypted" data gets accessed means I do not want my words recorded and stored on Amazon's, or anyone else's, servers. Hell, I work for a company that designs and sells a network appliance to find bad traffic on networks. When someone has access to servers or the network, getting access to information is trivial. Amazon now also sells the Echo Look, a camera that currently helps you dress fashionably. I do not even have to talk about how creeped out that makes me feel.

We grow increasingly reliant upon companies that make our lives more convenient. I've used Google's e-mail, calendaring, and document storage services for years because they were easy to use, worked directly with my phone, and meant I did not have to worry about e-mail. There are some nice perks to that: online document editing, having airline data automatically parsed and made available, intelligent spam filtering, and device syncing, to name a few.

If I do not want my speech hosted on Amazon or Google servers... why is my textual life hosted and sifted through by Google?

Taking Back My E-mail

The first thing I've decided to move off of Google, and back into my own control, is my e-mail.

I have a lot of e-mail addresses, and I have been attempting to consolidate them into just a few. Google made that pretty easy. I'm grandfathered into the old G Suite plan that was free for up to 100 users, and I took liberal advantage of domain aliases and catchall e-mail addresses.

I looked at services like FastMail, ProtonMail, and Kolab Now. All three are highly regarded, with Kolab and ProtonMail being open source projects. Moving my domains and setting up aliases, though, would end up being very, very costly. Kolab charges around $50 just to set up a single domain alias. FastMail and ProtonMail would also get very pricey as I moved all my domains over.

ProtonMail also lost points because I would have to use a web browser on my desktop, and I want my e-mail in any app of my choosing. I am not paranoid enough to think someone is trying to get into my e-mail, so the security aspect of ProtonMail was not a huge selling point.

I decided to host my own e-mail.

Running My Own Server

"Email is one of the bastions of the decentralised Internet and we should hang onto it" - Nux, Hacker News

I'm not afraid of servers or their maintenance at all. My career started with maintaining and configuring servers, so why not just run my own e-mail server?

I know, I know. I should not run my own e-mail server because:

  • There are lots of moving parts
  • It's not just e-mail; it's virus scanning, spam filtering, and e-mail access
  • Maintenance is time consuming
  • Blacklist maintainers are cold, heartless beings that never remove IPs
  • Russians will hack me
  • E-mail isn't secure
  • I have to trust my host

Frankly, most of the above is FUD. If we, as developers, are telling people to run things like Docker or set up their own VPS because "it's the right way to run a web app," then running an e-mail server should not be some scary thing. Granted, I am not going into this blind, as I've set up an e-mail server before, but come on, people. It isn't that bad.

I do want to cut down on the amount of work I have to do. I first looked at Mail-in-a-Box, a set of scripts that sets up a mail server. I decided against it as it is pretty much all or nothing: you set up the box the way it wants to be set up, and that's it. Want to do something else with the box? Too bad.

I then found sovereign, a set of Ansible playbooks that sets up a server with e-mail as one of its various services. Since it is just Ansible configuration and I know how to work with that, I decided on sovereign.

Setting up the Server

The Server

I use Digital Ocean for a lot of projects. As I said before, privacy from foreign powers is not a current concern of mine, so hosting a server in the US is fine for the moment. I created a VPS with Debian 8, as that was what sovereign recommended.

The next thing I did was check the assigned IP on http://multirbl.valli.org/. This site checks a bunch of well-used DNS blacklists to see if the IP that Digital Ocean gave me has a shady history. The first one... well, once it hit twenty blacklists I deleted the VM and rebuilt it to get a different IP.
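
You can also spot-check a single blacklist by hand: reverse the IP's octets and query them inside the DNSBL's zone. For example, against Spamhaus ZEN with a documentation IP:

# An answer in 127.0.0.x means 203.0.113.7 is listed; no answer
# (NXDOMAIN) means it is clean on this particular list
dig +short 7.113.0.203.zen.spamhaus.org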

The second one was only on four blacklists. That is a much more manageable number. Most blacklists are fairly easy to get removed from, and if I'm only on four, I will take my chances.

With that sorted out, I followed the rest of the instructions in the sovereign README file. It took only a few minutes of prep before running the Ansible playbooks.
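
The run itself boils down to pointing the inventory file at the new VPS and kicking off the main playbook, roughly like so (defer to the README for the exact flags it wants):

ansible-playbook -i ./hosts site.yml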

I started off with a domain that did not previously have e-mail associated with it, to test things out. That way, if it all went to Hell, I wouldn't lose any e-mail. I ran the playbooks, and after about 15 minutes the server ran out of memory.

I tried to work around it, but with everything running, 512MB was not enough. I deleted the server and provisioned a bigger one. Not only did it have more memory, it also had more hard drive space.

That worked better. About 20 minutes later I had a server up and running!

Shutting down Services

sovereign comes with a bunch of services installed, and since this was my first run through I let it install everything. Once I confirmed everything was working well, I SSH'd into the server and disabled a bunch of stuff I did not need, like ZNC. I happily pay IRCCloud for IRC bouncing.

Most servers are compromised through the services running on the box; it is rare that an actual OS exploit is the problem. I removed the roles I did not need from the site.yml file and shut down the corresponding services.
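
Turning something like ZNC off on Debian 8 is quick - assuming the unit is simply named znc, something like:

# Stop the service now and keep it from coming back at boot
sudo systemctl stop znc
sudo systemctl disable znc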

I did want to keep the webmail, so I just disabled the vhosts I did not need. So far so good.

Multiple Domains

sovereign actually makes it pretty simple to set up multiple domains on a single install. group_vars/sovereign houses all of the domains and accounts you want to set up. Adding a second domain was as simple as adding a new entry under mail_virtual_domains, with the associated accounts under mail_virtual_users.
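
The shape of the file is roughly the following - a sketch from memory, so treat everything other than the mail_virtual_domains and mail_virtual_users keys as placeholders and copy the real structure from sovereign's example file:

mail_virtual_domains:
  - name: example.com
    pk_id: 1
  - name: example.net
    pk_id: 2

mail_virtual_users:
  - account: chris
    domain: example.com
    password_hash: "$6$..."   # e.g. output of doveadm pw -s SHA512-CRYPT
    domain_pk_id: 1
    uid: 1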

Another Ansible run, and the domains I wanted to move off of Google were all set up. I tested logging in via Evolution, the e-mail client that comes with GNOME and the one I use on my desktop and laptop. Autoconfiguration did not work, but I manually set up IMAP+ with no issues. I could send e-mail to and from the accounts without a problem.

That left me figuring out how to get catchall e-mail addresses to work. There was an open issue on the GitHub project, so I dug around a bit. sovereign uses a PostgreSQL-backed e-mail system for the users, so finding out how to do catchall addresses was a bit of a pain. Turns out it is really involved and not well documented - and that is not a problem with sovereign, but with Postfix itself.

I found instructions at https://workaround.org/ispmail/wheezy/connecting-postfix-to-the-database. I created a new file at roles/mailserver/templates/etc_postfix_pgsql-email2email.cf.j2 and modified the Ansible scripts to use it per the workaround.org instructions.
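
The idea behind the email2email map, per workaround.org, is that it returns every real mailbox as an alias for itself, so explicit addresses win before a domain-wide catchall alias can match. The template ended up looking something like this, with the connection variables and the table/column names standing in for whatever sovereign's schema actually uses:

hosts = 127.0.0.1
user = {{ mail_db_username }}
password = {{ mail_db_password }}
dbname = {{ mail_db_database }}
query = SELECT email FROM mail_users WHERE email = '%s'

Postfix then reads this map from virtual_alias_maps, listed ahead of the normal alias map.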

Another Ansible deploy, and I tested it from my old Hotmail address.

I did not get my e-mails.

Checking the logs, I saw greylisting deferrals. It turns out Outlook.com/Hotmail send from a huge, rotating pool of servers, which interacts badly with greylisting since retries keep coming from new IPs, so my server kept deferring them. I added the following to /etc/postgrey/whitelist_clients and restarted postgrey:

# Outlook.com
104.47.0.0/17
40.107.0.0/16
/.*outbound.protection.outlook.com$/
/outlook/

I sent another e-mail, and my catchall started working! Well, technically it had been working all along; the greylisting was just slowing Outlook.com down.

Moving from Google

After all my testing, I was ready. I went into my DNS providers and added the needed MX, DKIM, and DMARC records for the new server. I waited about fifteen minutes, as the TTL on all the records was 900 seconds, then tried to send an e-mail. It showed up in my new inbox.
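
For reference, the records for one domain look roughly like this in zone-file form. The mail hostname and DKIM selector here are placeholders; the DKIM public key comes from the server itself:

example.com.                   900 IN MX  10 mail.example.com.
mail._domainkey.example.com.   900 IN TXT "v=DKIM1; k=rsa; p=<key from the server>"
_dmarc.example.com.            900 IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"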

I actually started receiving legitimate e-mail as well. I noticed some messages, like e-mails from Twitter, were coming in about 2 hours later than their timestamps. A quick look at the logs showed I was greylisting Twitter's servers as well. Everything was working though, as greylisting is a normal part of day-to-day e-mail. If I'm greylisting someone and it's important, there are many other ways to get in touch with me ASAP.

I have years' worth of e-mail sitting in GMail, though, and I wanted to move all of that over.

After some searching I came across imapsync, which is an open source tool that syncs mail from one IMAP server to another. I followed the directions at http://blog.jgrossi.com/2013/migrating-emails-using-imap-imapsync-tofrom-gmail-yahoo-etc/ on compiling and setting it up on my Ubuntu 17.04 desktop.

I then followed the directions at https://imapsync.lamiral.info/FAQ.d/FAQ.Gmail.txt for syncing from GMail to my local server. I settled on the following command to run:

imapsync \
    --host1 imap.gmail.com \
    --ssl1 \
    --user1 me@googlehostedemailaddress.com \
    --password1 p@ssw0rd \
    --authmech1 plain \
    --host2 mail.newmailserver.com \
    --ssl2 \
    --user2 me@googlehostedemailaddress.com \
    --password2 n3wp@ssw0rd \
    --useheader "X-Gmail-Received" \
    --useheader "Message-Id" \
    --automap \
    --regextrans2 "s,\[Gmail\].,," \
    --skipcrossduplicates \
    --folderlast "[Gmail]/All Mail"

GMail has a 2.5GB per-day limit on mail transfers, but I was below that limit. I fired up the command and was immediately shut down by Google. It considers PLAIN authentication insecure (for good reason), but it provided a link and an explanation. I followed the directions and ran the command again.

It took nearly 48 hours to download all of the e-mail, but it worked. I started to see all of my folders and messages show up on the new server.

With that, I was off of Google's mail servers.

Security Concerns

E-mail is not secure; it was never designed to be. Even something like ProtonMail, which touts its encryption, does nothing to encrypt e-mail once it leaves ProtonMail's servers. Anyone can sniff e-mail on the wire. That's the nature of e-mail.

What is a concern is authentication, and access to the box.

SSH access is locked down to key-based authentication; no users have passwords. sovereign also sets up fail2ban, which should stop any brute-force attacks. I'll probably supplement that with OSSEC, which I should be able to install with a new Ansible role.
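
On the SSH side, the lockdown amounts to a couple of sshd_config lines. These are the typical values; I have not diffed them against exactly what sovereign writes:

# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no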

For IMAP and any virtual hosts on the machine, sovereign sets up Let's Encrypt SSL certificates, along with scripts to renew them when needed. sovereign sets up Roundcube for webmail, which is protected by these certificates, and any new subdomains it activates will be protected as well (with the appropriate changes to the Ansible configuration).

E-mail access and sending both require authentication. Most servers get blacklisted because they relay mail without requiring authentication on the sending side. Authentication is set up by default with sovereign, and all of it happens over SSL/TLS.

My main job now is to update the base OS and packages every so often. Other than that, I think I'm pretty well set up.

Step One Completed

It's been a few days now, and so far so good. The only hard thing thus far was setting up the catchall addresses. I'm getting e-mail on my laptop, desktop, and phone without an issue. I've tested sending mail to different services and so far have not been blocked. The e-mail transfer from GMail to the new server has been taking a while, but it's pretty hands-off once it starts.

I am not totally off of Google yet. The next step is to move all of my calendars, which I believe I can do with ownCloud, an open source file, calendar, and contact storage/sharing service that gets installed as part of sovereign. ownCloud should handle moving not only my calendar from Google Calendar, but also my files from Google Drive.

I also have a few patches that I want to clean up and send to sovereign. One nice one is the catchall setup, but I've also been working with the Ansible scripts a bit to make partial runs possible. By default it runs all the tasks, so something like adding a single e-mail address means a 15-20 minute run.

So far I've been impressed with sovereign. I'd highly suggest looking into it if you want to run your own server.

This was originally published on Medium.com.

If you aren’t familiar with how Open Source came to be the way it is today, please read “A History of Open Source,” which is effectively Part 1 of this small series of posts.

“younger devs today are about POSS — Post open source software. fuck the license and governance, just commit to github.” — James Governor

There are two basic licensing camps in the Open Source world: the world of copyleft and the GPL, and the permissive realm of BSD/MIT. Since 2000, there has been a shift toward permissive licensing.

Is one better than the other? If so, why?

The trend does seem to indicate that the current development environment favors developer ease-of-use (permissive licensing) over a requirement of code sharing (copyleft). The general idea of permissive licensing is to make the developer’s life easier, but what if there were an even more permissive license than the permissive licenses?

“Empowerment of individuals is a key part of what makes open source work, since in the end, innovations tend to come from small groups, not from large, structured efforts.” — Tim O’Reilly

As with everything else, the internet changed how we share code.

Github did what no other source code sharing system had done: it made it easy to share code. Now, before you jump down my throat, let me clarify. While there had been Sourceforge for open source projects, and Google Code for sharing code, neither was that great, especially for a new developer getting started.

Github made it easy for anyone to throw code up on a website, and made it easy to get that code down to your machine. They invested in teaching people to use git and made the case for why you should use them. They made open source project hosting free.

For many years Github actively decided not to enforce a license on code uploaded as open source repositories, leaving it up to the maintainer to sort out. 80–90% did not bother with a license, even after a change in 2013, when Github started asking about licensing when new projects were created.

“Software is like sex: it’s better when it’s free.” — Linus Torvalds

“All information should be free” has been a tenet of hackers since the 1960s. Instead of restricting usage of code, why not just make it free? Completely free?

There has been a recent trend toward releasing software under much more lax licenses that veer closer to Public Domain than to an established Open Source license. In some extreme cases, code is released without any license at all, under the assumption that no license is the same as Public Domain.

The driving force behind this idea is “I don’t care what you do with my code.” It’s a noble idea that hearkens back to the 1960s: code does not need all of these rules around sharing and usage; just take my code and do what you want.

There are even licenses that support this, made necessary by the way copyright works. Licenses such as the WTFPL (Do What The Fuck You Want To Public License) and the DBAD (Don’t Be a Dick) Public License are designed to skip the nitty-gritty thinking when it comes to sharing code: here is code, just use it.

The First Fallacy — No License is OK

“Linux is not in the public domain. Linux is a cancer that attaches itself in an intellectual property sense to everything it touches. That’s the way that the license works.” — Steve Ballmer

Licensing is restrictive no matter which camp you are in, and licensing makes it harder to integrate software. For example, the company you work for probably will not use GPL software for fear of having to release the source code of its flagship product, which contains many proprietary ideas and business rules.

In the US, copyright is assigned automatically. There isn’t a special form you have to send in to the government; when you create something, copyright is assigned to you, or to whoever hired you to do the work. There are things you can do to further prove that you are a copyright holder, but simply publishing code online marks you as the copyright holder. Created works do not automatically go into the public domain anymore.

Copyright holders hold all the cards. Just because you can see the source code for a piece of software doesn’t mean you can use it without repercussion, just like finding a $100 bill on the ground doesn’t automatically make it yours.

We live in a world controlled by copyright, and until copyright laws change, releasing software without a license is a dangerous move, potentially even more dangerous than picking any actual license.

Unless you have something in hand that says you are allowed to use the software, you are right back in the AT&T Unix situation: the copyright holder can pick up their ball and go home, or worse, sue you for using their software.

The Second Fallacy — Lax Licenses are Open Source

”From my point of view, the Jedi are evil!” — Anakin Skywalker

The current development landscape very much carries a “Fuck It, Ship It” attitude; it is a core mentality of many tech startups and developers. Getting an MVP out and validated is more important than wasting time thinking about licensing. We are developers that use open source tools, so we feel the need to give back and release what code we can.

In an ideal world you might just release your software into the Public Domain, but many countries do not recognize public domain, and its definition differs depending on where you are. You need some sort of license.

In a world where Public Domain is not really a workable way to release code, we end up with licenses meant to absolve the original developer of putting restrictions on the code:

  • Don’t Be a Dick
  • Do What The Fuck You Want
  • Don’t Be Evil

Developers do not want to have to mess around with licensing, and public domain is not a viable choice. “I just want to release code.” So developers ended up writing very lax software licenses that basically say they don’t care what you do with the code.

These licenses are also littered with vague concepts; take the DBAD. What defines being a dick? Who defines it? While there are examples in the license, the DBAD even says it is not limited to the examples given. What happens when someone decides you are being a dick with their software when you don’t think you are? Douglas Crockford famously added “The Software shall be used for Good, not Evil” to the MIT license used for JSMin. Who determines what is evil?

These lax licenses come from a good place, and the people who write them are not ignorant or stupid. The problem is that the legal system does not like vague concepts, and from a business standpoint vague definitions can really put you in a bad spot if someone decides you are being a dick, or doing something evil.


Developers that are fed up with licenses, procedure, and bureaucracy are, in my mind, ignoring sixty years of history in computing. The “Just Ship It” attitude and the “Just Commit It” culture of many groups feed into an idea the early MIT hackers would have loved: make the software available and good things will come of it.

As humans, though, we screw it up. We tried sharing software without licensing and, honestly, that did not work out. Hell, we cannot even agree on how software should be shared. Should it be copyleft? Should it be permissive? Can’t I just give it away?

Open Source licenses were chosen because they have been vetted and have the legal verbiage to make their use cases, permissive or copyleft, safe. While it might suck to have to put a license on something, sometimes the right, and safe, thing to do is suck it up and spend thirty seconds deciding if you want a permissive license or a copyleft license.

Saying that we are beyond Open Source and the need for licenses is just a lie developers tell themselves when they don’t want to think about what happens to their code. You created it; take thirty seconds to make sure the code is released properly and will be used properly.

Go to https://opensource.org/licenses/alphabetical and take a look at the licenses that are available. There are many out there, as well as the venerable GPL and BSD licenses. If that list is daunting, check out http://choosealicense.com/ from Github.

Don’t ignore sixty years of history.