This concept applies to much of life, but to computer science in particular.
Why is it that new ways of doing things can have remarkable success at “the discovery stage” and end up being useful but disappointing in the field? Object-oriented programming turns into spaghetti class interdependencies and deep hierarchies in day-to-day use. A giant leap forward and two steps back. Software patterns turn into misuse and abuse of the Singleton pattern and inappropriate shoehorning. Extreme programming becomes an excuse to write unmaintainable code that passes each and every unit test. Simplicity becomes an excuse to build things so bare-bones as to become unusable. It seems like each new “promising technique” becomes troublesome once it goes mainstream. Still worth the trouble, but troublesome all the same.
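The Singleton abuse mentioned above tends to look something like the following sketch (the class and member names here are invented for illustration): a single instance that starts out as harmless shared configuration and gradually accretes unrelated global state.

    // Hypothetical sketch of a Singleton turned "god object": one hidden
    // global that every caller reaches for, instead of receiving its
    // dependencies explicitly.
    import java.util.HashMap;
    import java.util.Map;

    public final class AppContext {
        private static final AppContext INSTANCE = new AppContext();

        // Unrelated concerns accrete here because the instance is easy to reach.
        private final Map<String, String> config = new HashMap<>();
        private final StringBuilder auditLog = new StringBuilder();

        private AppContext() {}

        public static AppContext getInstance() {
            return INSTANCE;
        }

        public void set(String key, String value) { config.put(key, value); }
        public String get(String key) { return config.get(key); }
        public void log(String line) { auditLog.append(line).append('\n'); }
    }

Once code everywhere calls AppContext.getInstance().get("timeout"), every class is silently coupled to this one mutable global, tests become order-dependent, and the spaghetti interdependencies described above set in.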
You may find yourself scratching your head about why such issues aren’t discovered before these techniques gain mass acceptance, and why they perform so well while still in the R&D phase. Even when these same things are tested in limited circles on “real world problems,” they continue to seem like the next silver bullet.
The answer, unfortunately, is as hard to swallow as it is simple. Early on, the people developing and working with these new technologies are quite often some of the best and smartest in the field. By the time the concept migrates to the other 90% of the coding world, it finds itself slammed against the middle and trailing ugly end of the bell curve. These portions of the curve will oftentimes misunderstand and misuse the new tools and techniques presented to them. It’s during these phases that the backlash usually kicks in. Guidelines and rules must be created to keep the average programmer from hanging himself with too much rope.
Oftentimes new paradigms go through a few separate stages of development and acceptance. In stage 1, the really smart R&D types hit on it. These people are genuinely great at creating new solutions that are both groundbreaking and elegant. Once these “conceptual entrepreneurs” have firmed things up, it is ready for stage 2. In stage 2, the brightest of the “real world group” are brave enough to tackle integrating these concepts into production code. They become evangelists of the techniques they have had so much success with. Stage 3 involves more risk takers embracing the techniques advocated by stage 2. They pore over the concepts involved, grok them, and integrate them into their code bases. It’s at stage 4 that the tide starts to turn: the middle of the bell curve is beginning to be pierced, and the new mantra is gaining traction. By stage 5, everyone accepts the new paradigm as “the way to go,” based more on appeal to authority and peer pressure than on understanding. At this stage a significant number may not understand the new techniques at all. They will advocate them but only graft them on top of their existing ways of doing things.
I like to think of myself as a stage 3 integrator. At that point things have hit my comfort zone and are worth a try. I will do everything I can to understand a new technique, but by no means have I created it or been the true daredevil integrating it into production code at stage 2. Later down the pipe, when I see these concepts misused and abused, I avidly follow the “shoring up” techniques that keep people from blowing body parts off by misusing a new technique.
No matter what the latest silver bullet appears to be, it is merely getting its justly deserved 15 minutes of fame during its integration period. These techniques will live on, but the focus will move on. It’s important not to throw the baby out with the bath water. New techniques may disappoint after several years of growth, but in the end they often stay part of our process. Object orientation, design patterns, components, agile programming; the list goes on. Don’t forget that silver bullets are still bullets even when they tarnish. Just because a concept hasn’t completely lived up to its hype doesn’t mean it can’t be a great and useful part of your process.