I've made no secret of my disappointment in the Software Engineering Institute's Capability Maturity Model.
It's not good work. I'm sorry, it's just not good work.
The thing is, when I talk to people about the CMMI, they often like it because it tells them what to do, or at least gives a list of areas to work on. In other words, they believe the CMMI is a cookbook: just follow the formula and you'll be fine. It's easy.
So some people like easy answers, and, when I take them away, they sometimes don't know what to do at all. Example: At one agile transition, I had some analysts basically tell me "Matt, we know the requirements template is horrible, but what will we do without it?"
So in addition to deflating the CMMI, it would be nice if I replaced it with something, right?
Now, as a context-driven thinker, I wouldn't want to provide an easy answer. After all, what works for an aerospace engineering firm might fail for an insurance company or a bank - and vice versa. But I can provide tools to help the reader figure out what the right solution is.
In other words, a model.
For several years now, I've been thinking about a model for software development. The thing is, I want it to be good.
If you think about it for a few minutes, even providing analysis tools is a non-trivial undertaking. You have to:
* Boil down the core issues in software development, separating the essence from the accident
* Measure, sift, and weigh them,
* Organize them, not according to a random list, but, ideally, according to a meaningful list, one that helps us understand our work in a deeper way,
* Explain them in a way that doesn't lose information (when I say aircraft, do I mean a 747 or a helicopter?) yet doesn't get bogged down in 600 pages of detail,
* Make them understandable to the practitioner in industry, the college senior, the professor, and the lay executive or manager alike,
* Avoid being misunderstood (good luck with that one - but I can try), and
* Avoid putting anyone to sleep - the readers have to finish what they've started.
The organization point is one that is especially important to me. Programming, estimating, specifying, communicating, and testing are all things we do when we develop software - but what is the relationship between them? Winston Royce spent most of his famous paper drawing different models of these activities. For the most part, he ended up with a diagram where every activity could point to every other activity. (No, really - check out Figure 10 on the last page.)
Yet every now and again, a great scientist comes up with a theory that explains and predicts how stuff works. Thanks to Newton's work, we can predict the movement of the planets in the sky.
Another example: when Mendeleev invented the periodic table, it didn't just provide us a list - it helped us predict the behavior of the elements, which recurs periodically. Using the table, scientists could predict elements previously unknown, just because there was a hole in the table. This little picture that can fit on a sheet of paper is no simple chart; it reveals part of the nature of matter. In other words, the periodic table was not invented at all; it was ... discovered.
Can we do something like that for software?
Oh, I know - it's ambitious, and a perfect job is likely impossible. But wouldn't it be great, to borrow a line from my friend Rick Scott, if we could just make strides in the right direction and declare victory?
I've had this idea percolating in my head for ten years now. During that time, I've been walking around, actively studying how companies do business, what works, and why. I went to graduate school at night for computer information systems, started blogging, became a contributing editor at Software Test & Performance Magazine, began teaching information systems at night, and read the success literature. Heck, I read the failure literature - and I developed, tested, and managed a whole lot of projects along the way.
Suddenly I realized: I may actually be qualified to write this - or at least to combine the work of Weinberg and Goldratt into something a little more specific. (You may see some of their ideas peeking out of the model; if you have specific references to suggest, I can add them.)
It's January 12, 2010. I've finally put pen to paper. This is the first step - a small step - in defining an actually good model for software. It is a start, and it struggles with all the challenges I listed above. I would like your help in moving it from a thought experiment to something useful, and from there to something worth publishing.
I think what I have now is good enough for a blog post, so let's get started:
The Heusser Software Performance Model - v 0.0.1
To assess the potential for success of a software team, look at these key factors. To improve it, improve these factors:
How tight is the customer-facing feedback loop?
I see three main feedback loops in software development. The first is between the developer and his code: how often does he compile or test his units? The longer the delay, the less the developer will remember what he was thinking, and the more errors will creep in. A second feedback loop occurs between time-to-develop and time for some sort of critical inspection (testing) at the customer level. The third is between the code that is actually developed and the requirements; in agile terminology, "how long are your iterations?" Too tight and the developers will be whipsawed, never getting anything done. Too loose and you'll deliver exactly what the customer asked for ... but not what he needs.
Now, the length of the feedback loop needs to take the nature of the project into account. It's unlikely that a systems upgrade project is going to be delivered to production every two weeks, or possibly even every two months. So the evaluator has to ask: how tight is the feedback loop to the customer, is that tight enough, and what can we do to improve it?
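To make the first loop measurable, here's a minimal sketch - with entirely made-up timestamps, since I'm not pointing at any real project - of how you might compute the gap between a code change and the feedback on it. Real data could come from version-control or build-server logs.

```python
from datetime import datetime

# Hypothetical event log: when a change was made vs. when the developer
# got feedback on it (a compile, a unit-test run, a build result).
events = [
    ("2010-01-12 09:00", "2010-01-12 09:02"),  # change -> unit test: 2 minutes
    ("2010-01-12 10:15", "2010-01-12 11:45"),  # change -> full build: 90 minutes
    ("2010-01-12 13:00", "2010-01-12 13:01"),  # change -> compile: 1 minute
]

def gap_minutes(change, feedback):
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(feedback, fmt) - datetime.strptime(change, fmt)
    return delta.total_seconds() / 60

gaps = [gap_minutes(change, feedback) for change, feedback in events]
print("Average minutes from change to feedback: %.1f" % (sum(gaps) / len(gaps)))
print("Worst gap: %.0f minutes" % max(gaps))
```

The same arithmetic works at the other two levels; only the timestamps change - check-in to test result, or iteration start to customer demo.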
How constraining is the main constraint?
Let's say, for instance, you meet with every member of the development team and go through every work item, and the answer you get is the same: "Oh, I'm waiting on requirements for that", "Oh, I'm waiting on test results for that", "Oh, Sally needs to get back to me on that ..." You walk around the cubicles and you see technical staff twiddling their thumbs. Oh, it's not their fault, they insist. They're blocked.
Something is wrong.
I'd say the work-in-progress queue likely hasn't had enough thought put into it, yet the natural human tendency is not to figure it out, but to pile more ill-defined, unclear work onto the staff. Or perhaps the analysis or test group is swamped; perhaps two senior testers quit and the test team simply cannot keep up with the developers.
In any event, there is a capability mismatch between the specialties, and it's causing some groups to sit idle while, hopefully, another group is running around trying to un-block them. Scrum and agile methods solve this with the idea that staff members should be able to wear more than one hat; when a bottleneck emerges, the team shifts to get that bottleneck un-stuck. (Another technique that can really help is the big visible chart displaying who needs to be working on what piece; I've seen behavior shift rapidly when a group realizes that it is the bottleneck.)
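To see why the mismatch hurts, here's a toy simulation - the throughput numbers are invented - of a dev-to-test pipeline where the test group is the constraint:

```python
# Toy dev -> test pipeline with a capacity mismatch; the numbers are
# made up, the pattern is the point.
DAYS = 20
DEV_PER_DAY = 5    # stories the developers finish per day
TEST_PER_DAY = 3   # stories the short-handed test group can check per day

waiting_for_test = 0
shipped = 0
for day in range(DAYS):
    waiting_for_test += DEV_PER_DAY               # devs keep piling work onto the queue
    tested = min(TEST_PER_DAY, waiting_for_test)  # the constraint sets the real pace
    waiting_for_test -= tested
    shipped += tested

print("After %d days: %d stories shipped, %d stuck waiting for test"
      % (DAYS, shipped, waiting_for_test))
```

The queue in front of the constraint grows without bound, and the whole system ships at the constraint's pace - pure Goldratt. Raise TEST_PER_DAY by shifting a developer to testing and the queue drains; add more developers and it only gets worse.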
Then again, it's possible that every team is sitting idle, which leads to the next problem.
Does the system encourage people to actually do work?
Take a look at the most hard-charging team members: the "go-to" guys, the people actually doing the technical things. Do they get the best seat at the table - or are they rewarded with more work on the weekends? Talk to the last three people to get promoted and observe how they work. What are they working on - the team's critical success factors, a pet project, something political? Observe how those people are perceived by the team. Look for the solution 'hubs': the people who know what is going on and work to solve problems directly. Then ask management for their perception of those people. Look for inconsistencies. Look for what behaviors are actually rewarded; are those the right ones?
What is the quality of the communication?
Are the team members actually talking to each other? Do the different specialties actually seek to understand? Or do people retreat to positions and assumptions? It's awfully hard to get any software delivered when "not looking bad" is the critical success factor, or when loyalty becomes more important than it should be. When work becomes a political situation, and fighting for petty rewards becomes more important than actually getting things done, it can become extremely hard to ship software.
The alternative, of course, is an organization that doesn't have room for such political infighting - a three-person startup, say, or one where the whole team actually believes a rising tide will float all boats. A quick way to assess the quality of communications is to look at the last three people who got promoted, find out what they do all day, and what they are like to work with.
Can a new employee jump in?
The CMM (and later the CMMI) had a slot called "training program", and that's not a terrible idea. Let me ask instead: assume you hire someone generally competent in at least half of your technology stack. How long does it take them to contribute in some meaningful way? Or do they wait two weeks for a computer, three for a login, four to get rights to use the version control system? Is the code a horrible mess that has to be explained by the one guy who actually understands it? Are newbie testers told to "test this" without any significant background in how we do it here, or do they start with a couple of days of pairing with a senior tester?
Applying the questions
A typical way of 'assessing' software projects is to throw all the team leads in a room and ask them a bunch of questions. I'm suggesting the opposite: walk around and see what is actually happening on the team floor. Ask to meet the newest employee; talk about what his new-hire experience was like. Meet with developers and ask to see what they are working on. If they tell you they are not blocked, ask to watch them work, or pair with them. Does the conversation get uncomfortable? Is it because of a lack of trust? That tells you about the quality of the communication.
Yes, I am recommending assessment by walking around and listening. Not interviewing - though you might want to do some of that - but actually observing the software being developed in the wild, and forming theories about the "why" later. (If I had to boil this whole line of assessment down into one sentence, it would be: Look for inconsistencies. Look very hard. If they are obvious and clearly cause the organization pain, yet people seem to have some sort of bias to ignore them, that's a bad sign. Dealing with the inconsistencies is probably the place to start.)
Now Break It
The list above is partial: some things that, when missing, make it easy to fail at software development. They are not the only things; I suspect technical competence should be on the list, along with something about keeping promises and taking responsibility. So the question is: can we come up with other ways that a team could fail, while still accomplishing all of these goals at a reasonably high level? We then add those to the list, rinse, and repeat.
Eventually, I'd like to come up with ten or fewer high-level bullet points, each of which can have some sub-bullet points (perhaps including a rule-of-thumb scale), and review it with the community. The goal is to give that to someone with some background in technology and some exposure to multiple methods and ideas, and have it give them something concrete from which to start making improvement suggestions.
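For illustration only - the factor names come from the draft above, while the scores and evidence are invented - here is one hypothetical shape such a checklist could take:

```python
# A hypothetical assessment: each factor gets a rough 1-5 rule-of-thumb
# score and a note on the observed evidence behind it.
assessment = [
    ("Customer-facing feedback loop", 2, "releases every six months, no demos between"),
    ("Main constraint",               3, "test group is the bottleneck; devs idle Fridays"),
    ("Incentive to do real work",     4, "promotions track delivery, not politics"),
    ("Quality of communication",      3, "specialties talk, but only through tickets"),
    ("New-employee ramp-up",          1, "three weeks to get version-control access"),
]

# Sorting by score puts the weakest factor first - probably the place to start.
for factor, score, evidence in sorted(assessment, key=lambda row: row[1]):
    print("%d/5  %-32s %s" % (score, factor, evidence))
```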
We may not change the world. But will you help me try? Specifically, I'm looking for ways to break the framework above, followed by ways to improve it to cover each weakness.
The best answer gets a favor: a $50 gift card, a book, co-authoring an article with me on the subject - it's up to you. But I'd like to move this forward if we can. I'm confident we can come up with something of value to the community. Exactly what that will look like, I'm not sure yet. One next step, after designing the model, is to analyze several high-functioning organizations and see how they score - as a "reference implementation", something tragically left out of the CMM.
What do you think?