Evaluating the harm from closed source
When asking the question “When is it wrong (or right) to use closed-source software?”, we should treat it the same way we treat every other ethical question: first, by being very clear about what harmful consequences we wish to avoid; second, by reasoning from the avoidance of harm to a rule that is minimal and restricts people’s choices as little as possible.
In the remainder of this essay I will develop a theory of the harm from closed source, then consider what ethical rules that theory implies. Ethical rules about a problem area don’t arise in a vacuum. When trying to understand and improve them, it is useful to start from widely shared intuitions about the problem. Let’s begin by examining the common intuitions about this one...
The most fundamental harm we have learned to expect from closed source is that it will be poor engineering – less reliable than open source. I have argued at length elsewhere that bugs thrive on secrecy, and won’t rehash that argument here. This harm varies in importance with the complexity of the software: more complex software is more bug-prone, so the advantage of open source is greater and the harm from closed source more severe. It also varies with how serious the expected consequences of bugs are; the worse they get, the more valuable open source is. I’ll call this “reliability harm”...
Yet another harm is that closed-source software puts you in an asymmetrical power relationship with the people who are privileged to see inside it and modify it. They can use this asymmetry to restrict your choices, control your data, and extract rent from you. I’ll call this “agency harm”.

Closed source also increases, in various ways, your transition costs to get out of using the software, making escape from the other harms more difficult. Closed-source word processors using proprietary formats that no other program can fully handle are the classic example of this, but there are many others. I’ll call this “lock-in harm”...