Several years ago, I worked with a company pursuing process improvement with the (then) brand new CMMI model.
To help us with this effort, we brought in a consultant from a consulting company with an "enterprise CMMI-5" offshore development center. At the time, I was the chair of the QA Committee for Software Engineering, and my role was to be the permanent party, ensuring continuity when, six weeks later, the consultant went home.
I could fill up several blog posts with stories about that effort; it was very enlightening for me personally, and helped define some of my thoughts about software process, American Business, and human nature. I would like to share one nugget from this part of my life:
Walking in the hallway after the meeting, the consultant turned and said something to me like this:
"Yes, Matt, this seems to be a good team. With a little work I am certain we can accomplish our goal of zero defect software."
Oh, hey, wait. He never really did ask us our goal - he /told/ us our goal. I remember thinking at the time "What is that all about?"
"Zero Defect Software" rolls so quickly off the toungue. It sounds so easy. Like increased quality, reduced risk, and saving money, everyone should be for it, right?
Low defects, sure. Constant effort and improvement, absolutely. Zero defects, however, is really interesting. Where did that assertion come from, exactly?
I propose to you that the whole "zero defect software" thing is an ideology. That is to say, the speaker had an idea of how the world should
be - then went and made up a theory of how the world could be like that.
When people talk about "theory" in software process, and say things like "I am not interested in theory", I suspect they mean theory designed around ideology. If you dig far enough, you can find ideology in the CMM, in UML, In "Software Architecture", In COCOMO2, function points, and in many other fads that are slowly fading.
We can do better.
What if, instead of making up theories about how software projects should be run, we instead studied software projects to try to understand what actually works? What if, instead of trying to generalize, we studied what specifically happened in specific situations, so that next time we can say "hey, this reminds me of the time ..."?
Do it often enough, and we start to slowly build a behavioral model of what is actually happening on the projects we are working on. You may not be able to articulate the model precisely, but when the PM says it'll be done on time, and you snort and reply "there's no way this is done before Christmas", or "Yes, it'll be done. Three weeks late. And buggy" - and you are right - that tells you your mental model is effective.
Now, based on that model, we can slowly observe patterns of what actually works. Out of my experience in commercial software in North America, I have come to believe that what the customer wants will probably change - so today's working software is tomorrow's defect. I also believe that feedback is a critical part of the process, and that it's better to deliver a 20% solution today, 40% tomorrow, and so on, than 100% at the end of the week.
I also believe in engaging the worker in the work - that the best 'processes' are defined by the workers themselves in order to do the job better, rather than being imposed with a goal of 'limiting variation.' Moreover, I believe in the power of choice in the workplace - and that standardization is rarely the way to get there.
And then there's the value of a project-based team working on delivering the software, instead of creating single-function 'teams' where each member competes and all work on different software projects.
Likewise, because we communicate in symbols that can have subtly different meanings for different people, there is a sort of information loss that happens in communication. In person, we can fight that loss by reading a person's body language or asking them to repeat the idea back to us. The problem with conversation is that the information is often lost over time, but we can reinforce it through daily standup meetings, pair programming, wikis, and other tools. Written documentation, on the other hand, is even more lossy - the original author has no idea if his writing came through, and no opportunity for feedback.
So how much to write, and how much to say, is a question of how much to invest - a trade-off.
All of these are little things I have noticed by observing software teams in the wild. They may not fit a particular ideology well, but they are important, because by noticing them we can design a software system to have less loss and more self-correction. Likewise, we can walk into a development shop, listen, and provide some advice that might actually be helpful.
In a nutshell: Let's get experience first, and build our theory from there. Another term for this is constructionism, or, as I first heard from my friend James Bach, to be a "Software Process Naturalist."
That means instead of saying "You're doin' it wrong!" we say "Well, you could try that. And if you do that, I would expect you to get these results. Now, are those the results you want?"
That's a kind of software theory I can, and do, get excited about. I've been doing it for years, and am just beginning to get to the point where I might consider formalizing it.
... but what do you think?