This was originally published on Medium.com

If you aren’t familiar with how Open Source came to be the way it is today, please read “A History of Open Source,” which is effectively Part 1 of this small series of posts.

“younger devs today are about POSS — Post open source software. fuck the license and governance, just commit to github.” — James Governor

There are two basic licensing camps in the Open Source world — the world of copyleft and the GPL, and the permissive realm of BSD/MIT. Since 2000, the trend has shifted toward permissive licensing.

Is one better than the other? If so, why?

The trend does seem to indicate that the current development environment favors developer ease-of-use (permissive licensing) over a requirement of code sharing (copyleft). The general idea of permissive licensing is to make the Developer’s life easier, but what if there were a license even more permissive than the permissive licenses?

“Empowerment of individuals is a key part of what makes open source work, since in the end, innovations tend to come from small groups, not from large, structured efforts.” — Tim O’Reilly

As with everything, the internet changed how we share code.

Github did what no other source code sharing system had done: it made it easy to share code. Now before you jump down my throat, let me clarify. There had been Sourceforge for open source projects and Google Code for sharing code, but neither was that great, let alone for a new developer just getting started.

Github made it easy for anyone to throw code up on a website, and made it easy to get that code down to your machine. They invested in teaching people to use git and made the case for why you should use them. They made open source project hosting free.

For many years Github actively decided not to enforce a license on code uploaded as open source repositories, leaving it up to the maintainer to sort that out. 80–90% of maintainers did not bother with a license, even after 2013, when Github started asking about licensing at project creation.

“Software is like sex: it’s better when it’s free.” — Linus Torvalds

“All information should be free” has been a tenet of hackers since the 1960s. Instead of restricting usage of code, why not just make it free? Completely free?

There has been a recent trend toward the idea of releasing software under much more lax licenses that veer more toward Public Domain than they do an established Open Source license. In some extreme cases code is being released without any license as to how it can be used, under the assumption that no license is the same as Public Domain.

The driving force behind this idea is “I don’t care what you do with my code.” It’s a noble idea that hearkens back to the 1960s. Code does not need all of these rules around sharing and usage, just take my code and do what you want.

There are even licenses that support this, made necessary by the way copyright works. Licenses such as the WTFPL (Do What the Fuck You Want To Public License) and the DBAD (Don’t Be a Dick) Public License are designed to skip the nitty-gritty thinking when it comes to sharing code — here is code, just use it.

The First Fallacy — No License is OK

“Linux is not in the public domain. Linux is a cancer that attaches itself in an intellectual property sense to everything it touches. That’s the way that the license works.” — Steve Ballmer

Licensing is restrictive no matter which camp you are in, and every license makes it a little harder to integrate software. For example, the company you work for probably will not use GPL software for fear of having to release the source code of its flagship product, which contains many proprietary ideas and business rules.

In the US, copyright is assigned automatically. There isn’t a special form you have to send to the government; when you create something, copyright is assigned to you, or to whoever hired you to do the work. There are things you can do to further prove that you are the copyright holder, but simply publishing code online marks you as the copyright holder. Created works do not automatically go into the public domain anymore.

Copyright holders hold all the cards. Just because you can see the source code for a piece of software doesn’t mean you can use it without repercussion, just like finding a $100 bill on the ground doesn’t automatically make it yours.

We live in a world controlled by copyright, and until copyright laws change, releasing software without a license is a dangerous move, potentially more dangerous than picking any of the established licenses.

Unless you have something in hand that says you are allowed to use the software, you are right back in the AT&T Unix situation: the copyright holder can pick up their ball and go home, or worse, sue you for using their software.

The Second Fallacy — Lax Licenses are Open Source

“From my point of view, the Jedi are evil!” — Anakin Skywalker

The current development landscape very much carries a “Fuck It, Ship It” attitude. It is a core mentality of many tech startups and developers. Getting an MVP out and validated is more important than wasting time thinking about licensing. As developers who use open source tools, we feel the need to give back, so we release what code we can.

In an ideal world you might just release your software as Public Domain, but there are many countries that do not recognize public domain, and public domain has different definitions depending on where you are. You need some sort of licensing.

In a world where Public Domain is not really a good thing to release code under, we end up with licenses whose entire point is that the original developer puts no restrictions on the code:

  • Don’t Be a Dick
  • Do What The Fuck You Want
  • Don’t Be Evil

Developers do not want to have to mess around with licensing, public domain is not a viable choice, and “I just want to release code” wins out. So developers came up with very lax software licenses that basically say they don’t care what you do with the code.

These licenses are also littered with vague concepts. Take the DBAD: what defines being a dick? Who defines it? While there are examples in the license, the DBAD even says it is not limited to the examples given. What happens when someone decides you are being a dick with their software when you don’t think you’re being a dick? Douglas Crockford famously added “The Software shall be used for Good, not Evil” to the MIT license used for JSMin. Who determines what is evil?

These lax licenses are coming from a good place, and the people who come up with them are not ignorant or stupid. The problem is that the legal system doesn’t like vague concepts, and from a business standpoint vague definitions can really put you in a bad spot if someone decides you are being a dick, or doing something evil.


Developers that are fed up with licenses and procedure and bureaucracy are, in my mind, ignoring sixty years of history in computing. The “Just Ship It” attitude and the “Just Commit It” culture of many groups feed into an idea that the early MIT hackers would have loved — make the software available and good things will come of it.

As humans though, we screw it up. We tried sharing software without licensing and, honestly, it did not work out. Hell, we cannot even agree on how software should be shared. Should it be copyleft? Should it be permissive? Can’t I just give it away?

Open Source licenses were chosen because they have been vetted and have the legal verbiage to make their use cases safe, whether permissive or copyleft. While it might suck to have to put a license on something, sometimes the right, and safe, thing to do is suck it up and spend thirty seconds deciding if you want a permissive license or a copyleft license.

Saying that we are beyond Open Source and the need for licenses is just a lie developers tell themselves when they don’t want to think about what happens to their code. You created it; take thirty seconds to make sure the code is released properly and will be used properly.

Go to https://opensource.org/licenses/alphabetical and take a look at the licenses that are available. There are many out there, as well as the venerable GPL and BSD licenses. If that list is daunting, check out http://choosealicense.com/ from Github.

Don’t ignore sixty years of history.

Posted on 2017-05-07



This was originally published on Medium.com

The world of computers is an odd place. In the span of my own lifetime, I’ve gone from not owning a computer because it was too expensive to owning a watch that has more computing power than the first computer I ever owned. When I think about it, the amount of computing power in my house today compared to twenty years ago is mind-boggling.

Software, too, has evolved. I started off with DOS, then switched to Windows 3.1. I never personally owned a modern Mac until a few years ago, but used them throughout school. There was always the PC vs Mac rivalry but I didn’t care for the most part. I used a PC because it played games. That was up until I found Linux.

Somewhere around 2000, I was at a book store and came across a boxed set for Linux Mandrake. I think it was something like fifty dollars and I had enough cash for it. I installed it on a second machine I had and was amazed.

I quickly ran into problems running it and had to search for help online. I started to learn about sharing source code, how to patch and recompile programs, and this whole world of sharing code. The GPL made all of this possible.

This GPL thing intrigued me though. Here was this document that told me I was allowed to modify and share the source code to software as long as I made my changes public. That all made sense. If something didn’t work correctly I should be able to fix it and let other people know of the fix. I could not do that with Windows, or Microsoft Office, or Photoshop on the Macs at school.

Why did I need this documentation, this proof that I was allowed to do this and not get in trouble?

That’s the world we live in.

How did we get here?

“All Information should be free” — Steven Levy, “Hackers: Heros of the Computer Revolution”, on the Hacker Ethics

In 1956, the Lincoln Laboratory designed the TX-0, one of the earliest transistorized computers. In 1958 it was loaned to MIT while Lincoln worked on the TX-2.

The TX-0 amazed the early computer hackers at MIT. It didn’t use cards, and it wasn’t cloistered away like the hulking behemoth of a machine from IBM that most people at MIT programmed against. You typed your program onto a ribbon of thin paper, fed it into the console, and your program ran.

Most importantly, the TX-0 was not nearly as guarded as the holy IBM 704. Most of the hackers were free to do what they wanted with the machine. There was one problem, and it was somewhat of a large one — the TX-0 had no software.

So the hackers at MIT created what they needed.

Most of the software was kept in drawers and when you needed something, you reached in and grabbed it. The best version of a tool would always be available, and anyone could improve it at any time. Everyone was working to make the computer and the software better for everyone else.

“All information should be free” was a core tenet of the hacker culture at MIT. No one needed permission to modify the software, as everyone was interested in making the software, and thereby the TX-0, better.

As the machines changed and the software changed, this ethos did not. Software would be shared and changed to work on many different types of hardware, and improvements were added over time. Needed the latest copy? Just ask for it. Need to fix it? Just fix it.

“To me, the most critical thing in the hobby market right now is the lack of good software courses, books, and software itself. […] Almost a year ago, Paul Allen and myself, expecting the hobby market to expand, hired Monte Davidoff and developed Altair BASIC. […] The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however. 1) Most of these “users” never bought BASIC […]” — Bill Gates, “An Open Letter to Hobbyists”

Fast forward to 1976. Computers have left the halls of the universities that had the physical space to house them in the ’50s and ’60s and are entering people’s homes. They aren’t necessarily like the computers we have today, but all computers need software.

The ideals of the hacker culture at MIT did not change as they spread westward and as these computers invaded the lives of hobbyists. What changed was the business around computers, and as with anything involving humans, there is always money to be made.

“All information should be free” reared its head when the tape containing Altair BASIC disappeared from a seminar put on by MITS at Rickey’s Hyatt House in Palo Alto, California. Why? Ed Roberts, the “father of the personal computer” and the founder of MITS (Micro Instrumentation and Telemetry Systems), had decided not to give the Altair BASIC software to customers for free, instead charging $200 for the ability to write software.

For better or for worse, copies of Altair BASIC started appearing and being shared.

The landscape of computers and software development was changing. You no longer had one or two giant machines sitting in a university with paid staff who could write software for them. With the TX-0 at MIT, it did not cost anything extra to make and distribute software because there was no downside — no money was being exchanged, just improvements to the workflow (and better gaming).

By the 1970s the need for software was seeing an ideological shift. Up until this point the creation of software had been paid for indirectly by the universities and companies that needed it, and since most software was built by university developers, it was infused with the academic idea of sharing knowledge. Now software developers were seeing the need to develop generic software that many people would use.

That costs money, because developers have themselves and their families to support.

“Those who do not understand UNIX are condemned to reinvent it, poorly” — Henry Spencer

The 1970s also saw the development of the Unix operating system at AT&T by Ken Thompson, Dennis Ritchie, and others. Much like the original tools built by the hackers at MIT on the TX-0, Unix grew as it was licensed to other companies and universities.

Unix was alluring because it was portable and handled multiple users and multitasking. Standards help people develop software, and Unix became one of those standards. Before it there was Multics for the GE-645 mainframe, but Multics was not without its faults.

AT&T, however, was not allowed to get into the computer business due to an antitrust case settled in 1956. Unix could not be sold as a product, and Bell Labs (owned by AT&T) was required to license its non-telephone technology to anyone who asked.

Ken Thompson did just that.

Unix was handed out with a license that dictated the terms of usage, as the software was distributed in source form. The only people who requested Unix were the ones that could afford the servers, namely universities and corporations, the same entities that were used to just sharing software.

The open nature of Unix allowed researchers to extend Unix as they saw fit, much as they were used to doing with most software. As fixes were developed or things were improved, they were folded into mainstream Unix.

The University of California at Berkeley maintained one of the most sought-after variants of the Unix code base, and started distributing its own variant in 1978, known as 1BSD, as an add-on to Version 6 Unix.

There was a hitch though. AT&T owned the copyright to the original Unix software, and as time went on AT&T used software from projects outside of itself, including from the Computer Systems Research Group at Berkeley.

Eventually AT&T was allowed to sell Unix, but its commercially available version was missing pieces that were showing up in the Berkeley variant, and the BSD tapes contained AT&T code, which meant users of BSD required a usage license from AT&T.

The BSD extensions were what we would eventually call “Open Source,” in a permissive sense. BSD was rewritten to remove the AT&T source code, and while it maintained many of the core concepts of and compatibility with the AT&T Unix, it was legally different.

Much like with Bill Gates and Micro-Soft’s (eventually Microsoft’s) Altair BASIC, we start to see the business side of software conflict with the academic side, or more precisely with the hacker idea of software.

We also see one of the first true Open Source licenses come from this, one that distinctly grants the end user rights over what they can and can’t do with the software. Unix had its own license, which up until commercialization (and a growing market) had been fairly liberal, but BSD wanted to make sure that Unix would be available to whoever needed it.

“Whether gods exist or not, there is no way to get absolute certainty about ethics. Without absolute certainty, what do we do? We do the best we can.” — Richard Stallman

In 1980, copyright law was extended to include computer programs. Before that most software had freely been shared or sold on a good faith basis. You either released your software for everyone to use as public domain, or you sold it with the expectation that someone wouldn’t turn around and give it away for free.

Richard Stallman was, and probably is, one of the last true Hackers from the MIT era. In a sort of hipster-y kind of way he yearned for the time when software could be free, not shackled by laws or corporations. In a sense, software was meant to be shared and wanted to be shared. “All information should be free.”

Stallman announced the GNU project in 1983, which was an attempt to create a Unix-compatible operating system that was not proprietary. NDAs and restricted licenses were antithetical to the ideals of free software that he loved.

The Free Software Foundation was founded in 1985, and along with it came the idea of “copyleft.” Software was meant to be free, and the GNU Manifesto shared Stallman’s ideas on the GNU project and on software in general. Whether you agreed with it or not, the GNU Manifesto was a fundamental part of what we now consider Open Source.

Stallman then consolidated the licenses of GNU Emacs, the GNU Debugger, and the GNU C Compiler into a single license to better serve software distribution: the GPL v1, in 1989.

The release of the GPL, the release of a non-AT&T BSD Unix, and the flood of commercial software in the ’80s and ’90s led us to where we are today, and to the three major ideals that exist:

  • Software should always be free — Copyleft
  • Software should be easy to use and make the developers lives easier — Permissive
  • Software should be handled as the creator sees fit — Commercial

“younger devs today are about POSS — Post open source software. fuck the license and governance, just commit to github.” — James Governor

Since 2000, a shift has been made toward permissive licensing. One could argue that the GPL is dying. One could also argue that developers are more interested in helping themselves than in the actual idea of free software.

There is no denying that there are two camps when it comes to Open Source software, with the crux of the problem being exactly what software is supposed to be, or do for us.

The GPL says it should be free. In a way, Software is a living, breathing thing that wants to have the freedom to become the best possible piece of Software. It cannot do that when it can be locked up, chained, and held back from the passions that developers have for making software better. You, the end user, are better because Software can be changed to make everyone’s lives better, and you are better because you can change the Software.

The other camp is more pragmatic in a way. Permissive licensing wants software to be free because that helps Developers. You, the Developer, release software to make people’s lives better. You, the Developer, are more interested in knowing that people can use your software or code in a way that they see fit. The end user is better because the Developer had the freedom to change the software to make everyone’s lives better.

Is one better than the other?

Or should we just throw it all to the wind and ignore sixty years of computer history and forget about licenses?

Posted on 2017-01-04



Recently, with the new Macbook refresh for 2016, many developers have taken a good hard look at whether or not they want to stick with macOS and the hardware. Traditionally Macbook Pros have been an excellent kit to use, and even I used one for travel up until earlier this year. They had powerful CPUs, could be loaded with a good amount of RAM, and had unparalleled battery life. The fact that macOS was built on a Unix subsystem also helped, making it easier for developer tools to be built and worked with thanks to the powerful command line interface.

The new hardware refresh was less than stellar. All jokes aside about the Touch Bar, it was not the hardware refresh many of us were looking for. While it does mean that 2015 models might be cheaper, if you are looking for a new laptop, is it time to possibly switch to another OS?

Linux would be the closest analogue in terms of how things work, but not all hardware works well with it. You will also lose a lot of day-to-day software, but the alternatives might work for you. If you are looking at a new OS, I'd heavily look at Linux on good portable hardware like a Thinkpad X or T series laptop. Even a used Thinkpad (except the 2014 models) will serve you well for a long time if you go the Linux route.

Up until about August, I ran Linux day-to-day. A job change meant that I had to run software that just did not work well under Linux, so I switched back to Windows. Back in 2015 I wrote about my Windows setup, and now that I'm on Windows full time I think it's time for an update. A lot has changed in the past year, and while working on Windows before was pretty good, it's even better now.

Windows 10 Pro

Yes, it might be watching everything you do, but I've upgraded all of my computers to Windows 10 Pro. Part of this was necessity, as Docker for Windows only works on Windows 10 Pro or higher, but Windows 10 itself also opens up the ability to run bash. If you are coming from Windows 7, there isn't much difference. Everything is just about the same, with a little bit of the Windows 8.1 window dressing still coming through sometimes.

Windows 10 Pro also affords me the ability to remote desktop into my machine. Yes, yes, I know that I can do that for free with macOS and Linux, but that's not the only reason to use Pro. Remote Desktop allows me to access my full desktop from my laptop or my phone. There's been a bunch of times where I'm away, get an urgent e-mail, and need to check something on our corporate VPN. I just remote desktop into my machine at home and I'm all set. This is much easier than setting up OpenVPN on my iPhone.

The main reason I run Windows 10 is Docker, which I'll outline below. The short of it is that Docker for Windows requires Hyper-V, and Hyper-V is only available on Windows 10 Pro or higher.
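
If Hyper-V isn't already enabled, the Docker for Windows installer should offer to turn it on, but you can also flip it on yourself. A minimal sketch, run from an elevated Powershell prompt (expect a reboot):

    # Enable the Hyper-V feature that Docker for Windows depends on
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All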

If you are running a PC and Windows, you should have upgraded. Nearly all the software that I had problems with works fine with Windows 10 now. Any issues I have are purely just because of how Windows handles things after I've gotten used to Linux.

The Command Line - bash and Powershell wrapped in ConEmu

Part of this hasn't changed. I still use Powershell quite a bit. I even give my Docker workshop at conferences from a Powershell instance. With the Windows 10 Anniversary Update, Powershell now works more like a traditional terminal so you can resize it! That sounds like a little thing, but being stuck to a certain column width in older versions was a pain. Copy and paste has also been much improved.

I still install git and posh-git to get the terminal experience I had using zsh and oh-my-zsh on Linux. Since Powershell ships with aliases that mimic many common GNU commands, moving around is pretty easy and the switch to using Powershell shouldn't take long. Now, some things like grep don't work, so you will have to find alternatives ... or you could just use real grep in bash.
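
If you want the same prompt, here is a minimal sketch of getting posh-git going, assuming Powershell 5's module tooling and the module name on the PowerShell Gallery:

    # Pull posh-git from the PowerShell Gallery for the current user
    Install-Module posh-git -Scope CurrentUser
    # Load it now, and write the import into your profile for future sessions
    Import-Module posh-git
    Add-PoshGitToProfile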

I also do all of my Docker stuff from within Powershell. The reasons for this are twofold - one is that it works fine in Powershell out of the box, and the second is that setting up Docker to work in bash is a bit of a pain.

PHP and Composer, which I use daily, are also installed with their Windows variants. I do also run specific versions under Docker, but having them natively inside Powershell just saves some time. PHP just gets extracted to a directory (C:\php for me), and you just point the Composer installer to that. After that, PHP is all set to go.
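
A sketch of that installer dance, assuming PHP was extracted to C:\php and that directory is on your PATH; these are the standard getcomposer.org installer steps:

    cd C:\php
    # Download the Composer installer and have it drop "composer" into C:\php
    php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    php composer-setup.php --install-dir=C:\php --filename=composer
    php -r "unlink('composer-setup.php');"
    # Sanity check
    php C:\php\composer --version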

The Windows Subsystem for Linux (or bash) is a must for me. It provides Ubuntu 14.04 as a CLI environment directly inside of Windows. This is a full version of Linux, with a few very minor limitations, for running command line and development tools. I'm pretty familiar with Ubuntu already, so I just install things as I would in Ubuntu. I have copies of PHP, git, etc, all installed.
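
Bootstrapping the basics looks exactly like it would on a real Ubuntu 14.04 box, php5 being what Trusty ships; a sketch:

    # Inside bash on Windows
    sudo apt-get update
    sudo apt-get install -y git curl php5-cli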

What I don't do is set up an entire development environment inside Ubuntu/bash; I leave that for Docker. Getting services like Apache to run can be a bit of a pain because of the networking that happens between bash and the host Windows system. You can do it, I just chose not to.

I'll switch back and forth between bash and Powershell as needed.

I've also switched to using ConEmu, which is a wrapper for various Windows-based terminals. It provides an extra layer that allows things like tabs, better configuration, etc. I have it defaulted to a bash shell, but have added a keyboard shortcut to launch Powershell terminals as well. This keeps desktop clutter down while giving me some of the power that Linux/macOS-based terminals had.

Editing files in bash

One thing I don't do in bash is store my files inside of the bash home directory. When you install it, it sets up a directory inside C:\Users\Username\AppData\Local\Lxss\rootfs that contains the installation, and C:\Users\Username\AppData\Local\Lxss\home\username that contains your home directory. I've had issues with files edited directly through those paths not showing up in the bash instance. For example, if I open bash, git clone a project into ~/Projects, and then open up PhpStorm and edit files through those Windows paths, sometimes the edits show up in bash and sometimes they don't.

Instead, I always move to /mnt/c/Users/Username/ and do everything in there. bash automatically mounts your drives under /mnt, so you can get to the "Windows" file system pretty easily. I haven't had any issues since doing that.
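
A typical start-of-project flow from bash then looks like this, with hypothetical paths:

    # Work out of the Windows file system, which bash mounts under /mnt
    cd /mnt/c/Users/Username/Projects
    git clone https://github.com/user/project.git
    # PhpStorm can now edit the same checkout at C:\Users\Username\Projects\project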

Docker for Windows

Microsoft has done a lot of work to help Docker run on Windows. While it is not as perfect as the native Linux version, the Hyper-V version is leaps and bounds better than the old Docker Toolbox version. Hyper-V's I/O and networking layer are much faster, and other than a few little quibbles with Powershell it is just as nice to work in as on Linux. In fact, I've been running my Docker workshop from Windows 10 for the last few times with as much success as in Linux.

It does require Hyper-V to be installed, so it still has some of the same issues as running Docker Toolbox when it comes to things like port forwarding. You can also run Windows containers, though nothing I do day-to-day requires them, so my work is all inside Linux containers.

I would suggest altering Docker's default settings though. You will need to enable "Shared Drives," as host mounting is disabled by default. I would also suggest going under "Network" and setting a fixed DNS server; this helps when the Docker VM decides to just stop resolving internet traffic. If you can spare it, go under "Advanced" and bump up the RAM as well. I have 20 gigabytes of RAM on my desktop so I bump it up to 6 gigs, but my laptop works fine at the default 2 gigabytes.

All of my Docker work is done through Powershell, as the Docker client sets up Powershell by default. You could get this working under Bash as well by installing the Linux Docker Client (not the engine), and pointing it to the Hyper-V instance, but I find that's much more of a pain than just opening a Powershell window.
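
For the curious, the bash side boils down to one environment variable once the daemon is reachable over TCP. A sketch, assuming you have ticked "Expose daemon on tcp://localhost:2375 without TLS" in the Docker for Windows settings:

    # Point the Linux docker client at the daemon in the Hyper-V VM
    export DOCKER_HOST=tcp://127.0.0.1:2375
    docker version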

I run all of my services through Docker, so Apache, MySQL, etc, are all inside containers. I don't run any servers from the Windows Subsystem for Linux.
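
Day to day that looks something like the following; the names and paths are hypothetical, and the host mount relies on "Shared Drives" being enabled as described above:

    # MySQL in a container instead of as a Windows service
    docker run -d --name db -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
    # Apache and PHP serving a project directory from the shared C: drive
    docker run -d --name web -p 8080:80 -v C:\Users\Username\Projects\site:/var/www/html php:7.0-apache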

PhpStorm and Sublime Text

Nothing here has changed since 2015. PhpStorm and Sublime Text 3 are my go-to editors. PhpStorm is still the best IDE I think I've ever used, and Sublime Text is an awesome text editor with very good large file support.

What I'm Not Using Anymore

A few things have changed. I've switched to using IRCCloud instead of running my own IRC bouncer. It provides logging and excellent mobile apps for iOS and Android. It is browser-based and can eat memory if the tab is left open for days, but it saves me running a $5 server on Digital Ocean that I have to maintain.

PuTTY, while awesome, is completely replaced for me by Powershell and bash. Likewise, Cygwin is dead to me now that I have proper Linux tools inside bash.

I've also pretty much dropped Vagrant. At my day job we have to run software that isn't compatible with Virtualbox, and Docker on Windows works just fine now. I don't even have Vagrant installed on any of my machines anymore.

It's a Breeze

Developing PHP on Windows is nearly as nice as developing on Linux or macOS. I'd go so far as to say that I don't have a good use for my Macbook Pro anymore, other than some audio work where I need a portable machine. I'm as comfortable working in Windows as I was when I was running Ubuntu or ArchLinux, even though I'd much prefer running a free/libre operating system. I've got to make money though, so I'll stick with Windows for the time being.

tl;dr

Here's what I use:

  • Windows 10 Pro
  • ConEmu wrapping bash (Windows Subsystem for Linux) and Powershell
  • Docker for Windows
  • PHP and Composer (native Windows builds)
  • PhpStorm and Sublime Text 3
  • IRCCloud

Posted on 2016-11-13
