I find the question "what is new and exciting in software?" to be a fascinating one.
On the one hand, we've got a line that goes back to the wisdom of Solomon: "There is nothing new under the sun." I mean, if we wanted to, we could find software wisdom in the work of Francis Bacon, or Galileo, or Eratosthenes.
On the other hand, we do have new things. Fifty years ago most programs were batch; then they went interactive, then windowed, then browser-based, then JavaScript-y, then Ajax. Today, we're dealing with servers communicating with a host of mobile devices, each of which may have different standards and capabilities. Plus we have touch devices, devices combined with telephony equipment, simple messaging ... and more.
Today's modern apps don't just need to email; they need to Woof. So yes, I think we have some room to grow and learn.
Meanwhile, in the test-o-sphere, two more interesting things are happening. Chris McMahon is hosting his second Workshop on Writing about Software Testing, with a theme of "The Frontiers of Software Testing."
Also, last week on Twitter, Jonathan Bach began a rousing conversation about the meaning of quality. It's hard to summarize a Twitter conversation, but I hear Jon saying that since quality means different things to different people, when we throw the word around as if everyone agrees on our meaning, we aren't doing anyone any favors. (Even if we say "quality is value to some person," the Weinberg definition, and we all agree on it, that still leaves the possibility that you care more about style and I care more about mileage; thus the "value" of a Honda Accord is different for the two of us.)
I know that as an industry, we've made attempts to agree on what quality is. Some might say we have reached the point of diminishing returns in the discussion. Lots of people think they know what "it" is, and may belong to one of several different camps.
Still, I think we have room to grow there. It's a frontier.
And that reminds me of a lot of things that we haven't defined very well - terms we use as if everyone agrees, but when it comes right down to it, we do not.
Here are a few things that come to mind:
(1) If a facilitator interviewed the do-ers, middle managers, and senior executives of your organization, asking "what is the role of the test group?" and "what risks do they address?" would that person come back with one consistent, concrete answer?
(2) Likewise, if he asked the same question about estimates - how they are derived and how solid they are - would he get one answer or several different ones?
(3) How about the "role" of the test lead, the supervisor, the manager? Are they player-coaches? Pure coaches?
(4) How about methodology? Is it viewed as a set of tradeoffs? Can your organization express those tradeoffs symbolically - or even in prose, or any other form of abstract reasoning?
(5) What are the tradeoffs involved in physically distributing the staff -- both having teams in different locations, and in work-from-home teams?
(6) How can we decide if a change is "working" or not?
(7) How can we improve communication and make decisions about risk and reward, especially around "Are we done testing yet?"
These are just a few of the "frontiers" that interest me in software development and testing. Notice how they tend away from engineering geekery - I'm much less interested in the hot new successor programming language to Ruby, and more interested in how to deal with people and project issues.
I think we've got a long way to go, and plenty of room for opinions.
I've always struggled when a term like "architecture" goes undefined. It bugs me. It strikes me as sloppy thinking. But that's how the sciences advance in any discipline, right? Terms start out fuzzy and get refined over time.
I mean, Aristotle had an idea about gravity that was dead wrong, and it stood until Galileo tested it. It wasn't until Newton came along that we actually understood the dynamics of the solar system in symbolic terms and could begin to predict the orbits of objects in the sky. And we still have Einstein and string theory and better and better refinements of those ideas.
We've come a long way in software. Fifteen years ago, the majority of the literature actually believed in Big Design Up Front and the "complete, correct, consistent" specification -- look hard enough and you can still find those people.
Today we're still struggling with dozens of ideas about the "right" way to do testing, from customer-defined tests, to the role of the independent tester, to Acceptance Test Driven Development and the idea that all the tests for a smaller piece of work can (or should) be defined up front. Every time someone tries one of these ideas, they are conducting an experiment.
On Saturday a copy of Lyssa Adkins's new book, Coaching Agile Teams, came for me in the mail. It's a great little book. It takes one of those roles that everyone agrees should exist, but that everyone might define differently, and says, "this is what I think the Agile Coach does, why they do it, and the results you'll get if you do things this way." The book is helping to fill in our body of knowledge.
We need books like this for testing, but to get there first we need to conduct some experiments, get some results, and build some agreement. We need to find the current frontiers, plant and grow, and create settlements.
At least, I think that's something we can do to improve the state of software development.
I've listed some frontiers. If you know me or my projects, I hope you agree that I'm working on them actively, and I'm getting somewhere.
What frontiers of software testing (or, more broadly, software projects) are you interested in? Just as importantly, what experiments are you running?
I don't claim to have all the answers. But together ... I suspect we might get somewhere.