What should be reported from your Testing Efforts – Part 3

This seems like an obvious question, and many teams have conquered it and set up a standard for reporting. Every once in a while it falls apart and people are blindsided by extra requests they did not expect, but for the most part it works fairly well.

We first mentioned this question a few weeks ago, and two weeks ago we promised some suggestions as to what could be recorded. The following measurements are gathered by almost every project we encounter:

  • Defects raised ranked by severity and priority.
  • Testcases completed, in progress, blocked, failed.
  • Number of times a testcase has been executed.
  • First time failures.
  • etc., etc., etc.

Almost all test management tools will supply these measurements and many more besides; sometimes the harder question is which ones to select. Just make sure that the measurements are for your project and your time period, otherwise the figures are misleading.
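Tallying these measurements yourself is straightforward. Here is a minimal sketch in Python over some made-up test-case records; the field names and statuses are illustrative, not taken from any particular test management tool:

```python
from collections import Counter

# Hypothetical test-run records (illustrative data, not from a real tool).
testcases = [
    {"id": "TC-1", "status": "passed",      "executions": 2},
    {"id": "TC-2", "status": "failed",      "executions": 1},
    {"id": "TC-3", "status": "blocked",     "executions": 0},
    {"id": "TC-4", "status": "in_progress", "executions": 1},
    {"id": "TC-5", "status": "failed",      "executions": 3},
]

# Testcases completed, in progress, blocked, failed.
status_counts = Counter(tc["status"] for tc in testcases)

# First-time failures: failed on their first (and so far only) execution.
first_time_failures = sum(
    1 for tc in testcases
    if tc["status"] == "failed" and tc["executions"] == 1
)

print(status_counts)         # tally per status
print(first_time_failures)   # 1 (only TC-2 failed on its first run)
```

The same loop-and-count pattern extends to defects ranked by severity and priority, or execution counts per test case.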

Metrics (combinations of two measurements, usually formed by dividing one measurement by another) are also provided by almost any test tool. As long as you guard against dividing by zero, these are just as easy to gather. Some examples include:

  • Testcases executed per week.
  • Defects generated per week.
  • High severity defects as a percentage of all defects
  • etc., etc., etc.

Again, the test management tool supplies these and other metrics; the only concern is to make sure the underlying measurements are for your project and time period (and not someone else's).
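Since a metric is just one measurement divided by another, the divide-by-zero guard is the only subtlety. A small sketch, with made-up weekly figures:

```python
def percentage(part: int, whole: int):
    """Return part/whole as a percentage, or None when whole is zero."""
    if whole == 0:
        return None  # no data yet; report "n/a" rather than crash
    return round(100 * part / whole, 1)

# Illustrative weekly defect counts (invented for the example).
defects_this_week = {"high": 6, "medium": 10, "low": 4}
total = sum(defects_this_week.values())  # 20

high_pct = percentage(defects_this_week["high"], total)
print(high_pct)              # 30.0 — high severity as a % of all defects
print(percentage(5, 0))      # None — week with no defects raised yet
```

Returning None (rather than zero) for an empty denominator keeps "no defects yet" distinguishable from "no high-severity defects".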

The items we find are missed the first time around are the trend measurements. Since there are no trends in the first week (a single data point is not a trend), and only weak ones in weeks 2 and 3 of any project, trends become an extra calculation in the third or fourth week. At that point, they may reveal some unpleasant information, such as:

  • High Severity defects are increasing in both number and percentage of all defects.
  • Defect fix time is increasing rapidly as the project progresses.
  • Testcase execution has slowed to a crawl.
  • etc., etc., etc.

Usually the test manager has a feel for this and probably already knows that the testing is not going well, but the trend analysis brings it out without question.

The only caveat is to make sure you are comparing the same items from week to week; otherwise you might as well throw the trend away.
