/* Given the amount of energy the software industry spends on Quality, it is amazing how elusive it remains (amazing as in stupefying, not awesome). High-quality software is not only hard to achieve but surprisingly hard to define.

For me, quality describes the fitness of the software to resolve the problem it was designed to deal with. Fitness in the Darwinian sense: how well the individual is optimized for the environment it lives in. Quality is also measured by ruthless Darwinian standards: is the software being used (life), or are users doing everything they can not to (death)?

Every little detail of the user experience affects its survivability. Quality is not limited to the number of bugs or UI usability. It is also influenced by price and business model (provided the user has to deal with them), by deployment method, and even by intangible properties such as coolness or humor.

Quality has such a broad span because there is no clear line between a crashing app and the intangible values the software radiates. They are all affected directly by the way the software was designed and implemented.

Take for instance Firefox and Opera (34.5% vs 1.9% market share). They are both free browsers but Firefox is open-source and Opera is not. This is not just a business decision; it goes to the core of the system and affects every aspect of its design, development, feature set, even the way it is distributed.
You could argue that the end user doesn't care whether the browser she is using is open source or not. My argument is that the set of values Firefox is associated with and radiates affects its user experience far more than the way it implements tabbed browsing (given they all have some sort of tabs ... seven!!! It took you seven versions?!?).

Natural Selection as an indicator of quality also leads to some surprising revelations. Consider Microsoft Word (stay with me here). The software everyone loves to hate is by far the most widespread word cruncher out there. This makes it (admittedly, by definition) the highest-quality solution for word processing in the current environment. Sure, it's expensive, buggy, unpredictable, user-hostile and altogether annoying. However, it is, at the moment, the optimal solution for word processing! This means that the aforementioned qualities are offset by others, such as interoperability with other office tools, bundled apps, a rich feature set and, most importantly, current spread and resistance to change (which fits nicely with the evolution analogy).

This doesn't mean that dominating software guarantees its "safety", but rather that potential competitors will need a significant advantage in quality for the population to tilt towards them. For this reason, there is a good chance Word's worst enemy is not OpenOffice or Google's hosted solution but rather new versions that tinker with old conventions. Combined with a changing environment (web apps, free alternatives, global warming), this might spell the end of the dominant species (or not).

But why is it that software quality is so hard to predict that it needs an after-the-fact definition? What is the big difference between a mail app and a toaster oven (excluding software-embedded ovens that could land the space shuttle)?

The common reason given is that software development is complex and a bit fuzzy around the edges. Despite attempts by the academic world and big software houses, it is as close to an engineering discipline today as my late grandmother's cooking. Sure, it yielded good, fairly repeatable results, but it was intuition-based and employed some informal methods which, if published, could capture the FDA's attention (or that of some other three-lettered agency).

Well, that's part of the story. The other part is the complex ecosystem in which software lives. Just as the optimal length of the cheetah's leg is impossible to calculate, there is no way to calculate the optimum point between Power and Simplicity, or Speed and Scalability: there are just too many variables. As an example, take the classic Google vs. Yahoo approach: only in retrospect is it possible to know that in the search-engine domain, users prefer a simple one-text-box solution to a powerful hundreds-of-useful-links page. Enough so to challenge the dinosaurs of the time (contrary to the belief of many professional analysts).

So what is the point of these earth-shattering insights? First, that great software will continue to rely on programmers with highly developed intuition and will not be replaced by "process" or tools in the foreseeable future. Second, that it is deceptively hard to judge software quality until it's too late. But maybe most important is the realization that I have lately exceeded the recommended dose of Dawkins writings. */

public class U extends Person {
   private final java.util.Set<Person> idiots = new java.util.HashSet<Person>();

   public void introduce(Meme meme) throws Feedback {
      Person origin = meme.getOrigin();
      if (idiots.contains(origin)) {
         // todo: implement a more diplomatic approach
         throw new Feedback("You're an idiot");
      }
      if (cutThroughTheCrap(meme)) { // A bit crude but highly optimized
         if (origin != this) { // It happens :-(
            throw new Feedback("Let me think about this");
         }
      }
   }
}
