My favorite book is Digital Fortress by Dan Brown. Without ruining the book for those who have not read it, there is a great quote in it: “Quis custodiet ipsos custodes,” Latin for “Who guards the guards?” I believe it comes from the Roman poet Juvenal. The interesting point here is that on an Agile team, when the developers build and run unit tests, or even integration tests, who checks those? Do you have someone with the knowledge to review the testing? Here is something that happened to me on a consulting job.

The other day I was arguing with my product owner over how a single test case allowed five people on the team to misunderstand what passing meant for the software. Basically, it went something like this.

The PO and I had met two weeks earlier to discuss acceptance criteria. We worked together to create AC for each story, then involved the one development team member who was available. After much discussion we all agreed that the AC was ready, and we added the story to the Sprint backlog for the next Sprint. The team then started working on the story, designing it, creating tasks, and so on. As the QA Analyst on the team I have the most experience in creating test cases, so that is the task I took on. I wrote some test scenarios (environment conditions that affect the outcome of a test), then had a developer on the team review the test case to make sure I was not missing anything and to add anything I had not thought of. Once the code was available I ran the test case and, based on the pass/fail criteria, marked it as passed.

Then the Product Owner came by my desk and asked a very good question: “…so what did you do to test that part of the code…” I explained that I used an internal tool to input data, and then I checked the output. He seemed very perturbed at this point. He said, “Wait, you did what to enter the data?” I replied by showing exactly what I had done. Then he asked whether I understood that the tool was not connected to the part of the code that was built. At this point I wanted to say “that’s out of scope for me,” but instead I told him that this test case had been reviewed by other team members and that I had been instructed on how to execute it by the developer who worked on it. BINGO! Here is the issue. The developer who built the tool was only looking at the pass/fail criteria and assumed his tool would accomplish the same thing. Only we were not trying to prove that the data was in the database; we were trying to test whether the new code would put it there. So we bypassed the new code that was supposed to be tested and went straight for the end result.
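To make that distinction concrete, here is a minimal sketch in Python. Every name in it (save_customer, fetch_customer, internal_tool_insert, the in-memory “database”) is a hypothetical stand-in, not anything from the real system; the point is only that the first test exercises the new code, while the second merely checks the end result, no matter how the data got there.

```python
# Hypothetical stand-ins: an in-memory "database", the new code under
# test (save_customer), and the internal tool that bypasses it.
database = {}

def save_customer(customer_id, name):
    """The new code we were supposed to be testing."""
    database[customer_id] = {"name": name}

def fetch_customer(customer_id):
    return database.get(customer_id)

def internal_tool_insert(customer_id, name):
    """The internal tool from the story: it writes straight to the
    database and never touches save_customer at all."""
    database[customer_id] = {"name": name}

def test_new_code_puts_data_in_database():
    # Exercises the code under test, then checks the result.
    save_customer(42, "Ada")
    assert fetch_customer(42) == {"name": "Ada"}

def test_end_result_only():
    # Meets the same pass/fail criteria, but proves nothing about the
    # new code, because the tool bypassed it entirely.
    internal_tool_insert(43, "Bob")
    assert fetch_customer(43) == {"name": "Bob"}
```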

After finishing the mini-retrospective on this blunder, I recalled something from another team I had worked on. After months of failures, that team finally concluded that since we were practicing test-driven development, the developers should be the first people to execute test cases, and since testers and QA Analysts are the experts in testing, those tests should be overseen by the testers and analysts. So we used this process:

  1. Requirements documents are created by product owners
  2. Business Analysts turn the requirements document into epics and stories
  3. The BA, PO, QA, and Dev Manager write the value statements and acceptance criteria for the epics and stories
  4. The team asks questions about the AC and gains a complete understanding of the stories
  5. The team then works out what tasks need to be done and what test cases should be written
  6. The team then starts building the test cases and researches the design of the system
  7. The team’s QA and testers write the test cases first, with the objective of the test case, the prerequisites it needs, and the Go Right and Go Wrong paths through the software.
  8. The team member(s) working on the tasks to complete a story then write the test case design steps.
    (This is key: if you are practicing test-driven development (TDD), then the developers of the stories will be the first ones to test the application. Also, you would not want the developers blocked or dictated to by testers who are still working out how to execute a test case. See the test-first sketch below.)
  9. Once the test case design steps are written, they are given to QA to review
  10. The code is written for the software piece
  11. The developers run the unit, component integration, and system test cases
  12. If all is working then this piece is passed to testers for validation
  13. The testing is done and the test case is set to pass.
  14. Now the story is ‘done done’.

The third ‘done,’ to me, is that the PO has approved the story and it is OK to go to the end user or customer.
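As a rough illustration of steps 7 through 11, here is a minimal test-first sketch in Python. The feature (validate_discount), its rules, and the 0–100 range are hypothetical examples, not anything from the article; the shape is what matters: the objective, prerequisites, Go Right path, and Go Wrong paths are written down before the production code, the developer runs the tests first, and QA reviews the design steps.

```python
import pytest

# Test case design (steps 7-9), written before the code:
#   Objective: reject discount percentages outside 0-100.
#   Prerequisites: none (pure function).
#   Go Right path: a valid percentage is accepted unchanged.
#   Go Wrong paths: out-of-range percentages raise an error.

def test_go_right_valid_discount_is_accepted():
    assert validate_discount(15) == 15

def test_go_wrong_negative_discount_is_rejected():
    with pytest.raises(ValueError):
        validate_discount(-5)

def test_go_wrong_discount_over_100_is_rejected():
    with pytest.raises(ValueError):
        validate_discount(150)

# The production code (step 10) comes only after the tests above, and
# the developer runs them first (step 11) before handing off to QA.
def validate_discount(percent):
    if not 0 <= percent <= 100:
        raise ValueError(f"discount out of range: {percent}")
    return percent
```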

In the past, teams I have worked with have found that following this outline removes the need for a separate testing cycle, because the exploratory testing is done during the Sprint and on planning days. Load, stress, and performance testing is done with virtual machines and automated test cases; Microsoft’s PowerShell and Windows Server 2008 Hyper-V are great for achieving this. Usability and look-and-feel testing can be done during the review meeting, where the stakeholders, PO, and, if possible, customers are present to give feedback. White-box testing is done once the team’s maturity reaches a medium level.
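The teams above scripted that kind of load run with PowerShell against Hyper-V guests; as a tool-agnostic illustration, here is a minimal sketch in Python of an automated load check that can run inside a Sprint. The operation_under_test function, the 50-call batch, and the 200 ms latency budget are hypothetical values chosen for the example, not figures from the article.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def operation_under_test():
    """Hypothetical stand-in for the operation being load-tested."""
    time.sleep(0.01)  # simulate roughly 10 ms of work

def timed_call(_):
    start = time.perf_counter()
    operation_under_test()
    return time.perf_counter() - start

def test_fifty_concurrent_calls_stay_under_budget():
    budget_seconds = 0.2  # illustrative latency budget
    with ThreadPoolExecutor(max_workers=10) as pool:
        durations = list(pool.map(timed_call, range(50)))
    assert max(durations) < budget_seconds
```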

After six months of analysis and discussion, we discovered that what we had been missing was to shorten the feedback loop and decrease waste, two very fundamental concepts of lean development. The feedback loop is shortened because the developer runs the tests first. Historically, developers run the unit and component integration tests, and then the system-level and UAT tests are run by subject matter experts, customers, or testers. Now, in the effort to reduce waste and shorten the feedback loop, testing has been moved up front to be done earlier in the process. And the earliest point is not right after the code has been written; the earliest would be during requirements gathering.


About the Author

Bob Small, founder of the Quality Consortium of Phoenix, has 14 years in the IT industry. Bob has been a developer for a professional senior-care provider. He started as a System Tester for the number one domain registrar in the world, continued his career in testing, and advanced into Quality Assurance at a leading contact center solution provider. Bob is a Lead QA Engineer at a local hardware and software company, is a guest lecturer at local universities and colleges, and has won worldwide online testing contests. He continues to learn Agile techniques and mentors those around him in testing techniques and methods. He has taught developers and mentored junior QA analysts in testing methodologies. His favorite quote is: “Plan your work, work your plan.”