I really appreciate all the comments about training last week. They really got me thinking about the state of our industry, skills transfer, and the consultant's dilemma. Please allow me to explain.
Ostensibly, as a trainer, you have some ideas or experiences to convey. You want your students to walk away and, back at the office, actually /do/ test-driven development, or performance testing, or some other technique. Or, at the very least, you want them to be better now at what they were doing before. So you want the skills you are teaching to actually transfer.
One term for this is "improving the client's condition." Now, we could quibble about that; after all, only the client can improve things for themselves, and what does 'improve' mean, anyway? But I think the basic vision of wanting your students to do something different and better is a pretty good start. And, yes, there are prerequisites: if the training isn't interesting enough, or is too condescending, you'll never even get a chance to help anyone out. They'll be turned off before they start.
Which leads to what I believe is the key question: Does the training actually transfer?
How do you know? Well, the trainer could give a practical exercise at the end, but another way is to actually visit the company six months later and ask how they do their work now. The alternative, often called "drive-by training", involves handing out certificates and getting out of Dodge quickly. Drive-by training can be surprisingly lucrative, but it feels unsatisfactory to me. I hope you agree.
When the XP and Agile community reads horror stories of "Fake Agile", "ScrummerFall", and so on, and claims that it wasn't true XP or that Agile wasn't done 'right', we're talking about a failure in skills transfer. Jerry Weinberg calls this the 'strawberry jam' principle: the further you spread the training, the thinner it gets.

So, while you can reach more people with a book than a course, and more with a magazine article than a book, at each level you run more and more risk that your students will get it wrong and not even know it, or give up without even trying. For example, does anyone remember RUP? James Bach mentions in a podcast here that the RUP development team never followed up to see if people could actually understand or 'do' the process.
So if we are going to talk about test training, our first real problem is skills transfer. Our second, which I find just as challenging, is something I call the consultant's dilemma.
That is to say, the typical student in a test training class has a set of assumptions about what testing is, how it's done, and what it takes to do it well. They may have a set of values; for example, that standardization is good, that a repeatable process is good, that automation will save money, and so on. In many cases, these deep-seated beliefs defy life experience. When the team has four or five failures at test automation in a row, they will find reasons to find fault with the execution, not the idea.
So, as a trainer or consultant, you've got a problem. What the team (especially the management) wants to hear is how to make testing more stable, predictable, and repeatable. They want to hear how to automate the testing, to script-ify it, to make the individual humans dispensable, and so on.
It is very easy to sell training and get lots of high marks on the evaluation sheets by telling people exactly what they want to hear: how to optimize testing by creating metrics, templates, process descriptions, and so on. Even if you aren't excited about the traditional methods of control, you can still help a team optimize a small piece within a given system.
The problem with telling people what they want to hear is that you aren't actually helping them improve their condition. But providing evidence, challenging the premise of people's deeply held convictions ... that is an emotional hot button. Do it wrong, and it's a great way to get not-so-great eval sheets and not be invited back.
So any speaker, trainer, mentor, or anybody else who wants to influence change has to ask himself: "If I kick in this direction, will I encourage change, or will I just kick over a bees' nest? What is the right direction to push in?"
(I am speaking specifically of an invited trainer; inflicting help is something else entirely.)
The good news about the consultant's dilemma is that you can use it as a heuristic to evaluate any speaker. Just ask yourself: "If I had two years of experience in testing, or none at all, would this guy's advice boil down to agreeing with my intuitive ideals, or is he actually introducing something new?"
To sum up: it is relatively easy to get high marks on evaluation sheets by telling people what they want to hear and, quite honestly, hard to do so by challenging them. Oh, there are rhetorical tricks. You can start from agreement, talk about what you agree on, then lead up to a contradiction or surprise. You can win rapport with the audience by (politely) pointing out the things we all do because of ideology, but that never seem to actually work.
I submit that if we want to advance as a community, and offer training with integrity, we'll need to do better at the skills transfer problem and the consultant's dilemma. The reason I am excited about the Context-Driven Community is that it is where I see these discussions actually happening.
I have a few opinions on how to improve, and I can share those later. For the time being, what do you think? And what am I missing?