Doc says that it's not really Good vs Evil, but Mono vs Poly. Doc lays out the case for diversity in our networks, especially in the form of compatible, but different, implementations.
I'll talk more about Evil in another post. Here is yet another set of ideas to add to the pot:
Second source semiconductors
The "Poly" concept that Doc refers to is widely present in the semiconductor industry -- second source.
The nature of semiconductors is that a specialized design is produced in sufficiently large quantities to recover both the design and the production costs. There are high barriers to entry on the production end of this market, with recent chip fabrication facilities carrying price tags in the billions of dollars. It is the low incremental cost of mass production that drives the profits of this industry.
Because of the large set-up costs, chips are usually produced in a single large batch sufficient to meet market demand as deemed appropriate by the producer. All of this combines to result in a market which tries to limit the number of designs to the ones that turn out to be most profitable.
The nature of the market automatically brings the availability of any chip into doubt. Other factors which can make things even worse include fiscal failure of the producers, excessive market demand (competing buyers driving the price too high), and outright disaster.
To reduce the risks for customers, and thus to help sell chips, suppliers often make legal agreements with other suppliers to provide a backup source of chips of a given design. This then becomes a good selling point for vendors to use, often explicitly stating where to find the second source (usually at the same price).
When chips are purchased as components to be built into some product, the cost of any changes to the final product makes them very expensive to substitute. If any component can't be acquired, large costs are incurred while seeking a suitable replacement. It is thus natural that a significant portion of the design cost of a new product goes into making sure that all of the components will be available in a timely and cost-effective manner. Engineers thus tend to seek out and specify chips with second sources.
The second source then helps to ensure the availability of a given component. It also helps in another way which is not quite as obvious: troubleshooting. If a given design is found to work well with one supplier's chips but not another's, it becomes possible to track down exactly where the fault lies, saving time and energy.
As the semiconductor industry matured, vendors recognized the benefits of second sources, and the value they gave to their customers. It became (and remains) commonplace to see the term "second source" in sales literature, and specification sheets.
Second source software
In the computing world, there are very few second sources. The barrier to production is essentially zero, as anyone can make another copy of a program. Thus, unlike in the hardware world, a second source for software means a second design, as opposed to a second producer.
There are good reasons to avoid second sourcing, and its cousin "forking". The costs of software are all in development, thus a second source is twice as much work. It is highly unlikely that someone will branch off from a project (fork) in order to produce different code with the same set of features. It's entirely rational that we would all want to single source as much as possible... until the resulting monoculture brings certain risks with it.
The number one driver for this discussion (from my limited perspective) is the vulnerable nature of our computing infrastructure. Most of our systems come from a few sources. It's not uncommon to find that a flaw in a widely used library results in vulnerabilities across a vast number of systems.
So, the incentives are beginning to appear for second sources of software. It's going to take a long time before things start to show up. It's even possible that another solution to the problem can be found which doesn't require such an investment.
The open source movement plays a part in this picture as well. Open source projects result in a product with NO production costs. The design costs have all been absorbed by the contributors to a given project. The availability of source means that literally anyone (even the user) can be considered a second source. In terms of debugging, the user can then delve into parts of the picture that would otherwise be hidden, fix problems, and become a new source. So, in this fashion, Open Source is partially equivalent to a software second source.
Prospects for the future
As the market learns, it may eventually make business sense for even the biggest vendors to have some form of second sourcing, but I see this as unlikely soon. (However, if there is money in it, businesses can spin on a dime)
For some users, Linux is a suitable second source. If you're not constrained to Windows-only applications, then you can swap operating systems, and go on with life. The rest of us will bear the costs, and as a result the market as a whole will seek out second sources in the long run.
So, you can see... I agree with Doc: mono is bad, poly is good.
Thanks for your patience, and attention.