Worse is Better

TL;DR: Worse protocols and programming languages tend to succeed because of their simpler implementations and their evolutionary characteristics: they are released much earlier, which gives them the chance to be adopted and improved much faster than better-designed technologies.

I was reading Where Have All the Gophers Gone? Why the Web beat Gopher in the Battle for Protocol Mind Share, and the most interesting part was this:

Successful open-source development can be understood by Richard Gabriel's claim that "the right thing" design approach tends to lose to "worse is better." That is, developers should release their software to the public early in the process in order to gain adherents and then let a larger development community make improvements to the code. The Gopher team put too much burden on themselves for providing innovations to the protocol, server software, etc. They decided that they would decide what the "right thing" was, instead of unleashing these decisions on others…. This had the practical shortcoming of significantly reducing the number of people who could be working to make necessary changes to Gopher.

Clay Shirky in his essay In Praise of Evolvable Systems says:

HTTP and HTML are the Whoopee Cushion and Joy Buzzer of Internet protocols, only comprehensible as elaborate practical jokes. For anyone who has tried to accomplish anything serious on the Web, it's pretty obvious that of the various implementations of a worldwide hypertext protocol, we have the worst one possible.

He thinks that evolvable systems like this get adapted and extended in a thousand small ways in a thousand places at once, rather than being designed once by experts. He says, “Centrally designed protocols start out strong and improve logarithmically. Evolvable protocols start out weak and improve exponentially.”
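
To make the shape of Shirky’s claim concrete, here is a toy C sketch (the starting points and growth rates are illustrative numbers I picked, not anything from his essay) that tabulates a “designed” system improving logarithmically against an “evolvable” one improving exponentially; within a few steps the evolvable system overtakes the designed one despite its weak start:

```c
/* Toy illustration of Shirky's claim; all constants are made up.
 * Build with: cc growth.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    for (int t = 1; t <= 10; t++) {
        double designed  = 80.0 + 10.0 * log((double)t); /* strong start, logarithmic gains */
        double evolvable = 10.0 * exp(0.35 * t);         /* weak start, exponential gains */
        printf("t=%2d  designed=%6.1f  evolvable=%6.1f%s\n",
               t, designed, evolvable,
               evolvable > designed ? "  <- evolvable ahead" : "");
    }
    return 0;
}
```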

A very interesting part from the “Oral History of Butler Lampson” interview that proves this point:

Richard Gabriel, in his post Lisp: Good News, Bad News, How to Win Big, argues that there are two schools of design, the New Jersey approach and the MIT approach, and that the New Jersey approach tends to win because of its implementation simplicity and better survival characteristics. He thinks that Unix and C are an example of this.

The MIT approach:

  • Simplicity -- the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
  • Correctness -- the design must be correct in all observable aspects. Incorrectness is simply not allowed.
  • Consistency -- the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
  • Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

The New Jersey approach (the worse-is-better philosophy):

  • Simplicity -- the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
  • Correctness -- the design must be correct in all observable aspects. It is slightly better to be simple than correct.
  • Consistency -- the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
  • Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
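
Gabriel illustrates the difference between these two lists with the “PC loser-ing” problem: when a Unix system call is interrupted partway through, the kernel keeps its own implementation simple by returning an error and making the caller try again, pushing the complexity across the interface into user code. Here is a minimal C sketch of that user-visible consequence, the retry-on-EINTR idiom (the wrapper below is my illustration, not code from the essay):

```c
/* Retry a read(2) that was interrupted by a signal.  Unix keeps the kernel
 * simple by failing the call with EINTR and letting the caller carry the
 * complexity of retrying -- the New Jersey trade-off in one loop. */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t read_retry(int fd, void *buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR); /* interrupted: just try again */
    return n;
}

int main(void) {
    char buf[256];
    ssize_t n = read_retry(STDIN_FILENO, buf, sizeof buf);
    if (n >= 0)
        printf("read %zd bytes\n", n);
    else
        perror("read");
    return 0;
}
```

The interface is slightly worse, since every careful caller ends up writing this loop, but the implementation stayed simple enough to ship everywhere, which is exactly the trade the New Jersey school accepts.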

He thinks that a technology just has to be put in a position where it can spread like a virus by doing something valuable for users with a minimal implementation. He says:

It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing. In concrete terms, even though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.

Gwern Branwen in his essay Bitcoin Is Worse Is Better thinks that Bitcoin is a perfect example of the worse-is-better approach. He says:

Guarantees of Byzantine resilience? Loosely sketched out and left for future work. Incentive-compatible? Well… maybe. Anonymity? Punted on in favor of pseudonymity; maybe someone can add real anonymity later. Guarantees of transactions being finalized? None, the user is just supposed to check their copy of the blockchain. Consistent APIs? Forget about it, there’s not even a standard, it’s all implementation-defined (if you write a client, it’d better be “bugward compatibility” with Satoshi’s client). Moon math? Nah, it’s basic public-key crypto plus a lot of imperative stack-machine bit-twiddling. Space efficiency? A straightforward blockchain and on-disk storage takes priority over any fancy compression or data-structure schemes. Fast transactions? You can use zero-conf and if that’s not good enough for buying coffee, maybe someone can come up with something using the smart contract features. And so on.

Other examples of technologies that won with the worse-is-better approach are JavaScript, the x86 architecture, and the Windows registry. Linus Torvalds thinks that there’s no way you’re going to design something better than what you’ll get from trial and error with a feedback cycle, and I think this is the main reason the worse-is-better approach works.

The last thing I want to include is a slide from Gabriel’s talk Models of Software Acceptance that summarizes how this can be turned into a theory for developing new technologies: