Tag: #softwaretesting

  • What should be reported from your Testing Efforts – Part 3

    This seems like an obvious question, and many people have successfully conquered it and set up a standard for reporting. Every once in a while it falls apart and people get blindsided by requests they did not expect, but for the most part it works fairly well.

    We first mentioned this question a few weeks ago. Two weeks ago we said we would make some suggestions as to what could be recorded. The following measurements are gathered on almost every project we encounter.

    • Defects raised ranked by severity and priority.
    • Testcases completed, in progress, blocked, failed.
    • Number of times a testcase has been executed.
    • First time failures.
    • Etc., etc., etc.

    Almost all test management tools will supply all these measurements and many more besides. Sometimes the question is which ones to select. Just make sure that you are getting the measurements for your project and your time period (otherwise the figures are misleading).
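
    As a minimal sketch of pulling such measurements yourself, here is a hypothetical Python snippet (the CSV column names are invented; every test management tool exports its own format) that counts defects by severity while filtering to one project and time period:

        import csv
        from collections import Counter
        from datetime import date

        # Hypothetical export format -- adjust the column names to your tool.
        PROJECT = "MyProject"
        START, END = date(2019, 5, 1), date(2019, 5, 31)

        severity_counts = Counter()
        with open("defects.csv", newline="") as f:
            for row in csv.DictReader(f):
                raised = date.fromisoformat(row["raised_on"])
                # Count only defects for our project and our reporting period.
                if row["project"] == PROJECT and START <= raised <= END:
                    severity_counts[row["severity"]] += 1

        for severity, count in severity_counts.most_common():
            print(f"{severity}: {count}")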

    Metrics (combinations of two measurements, usually formed by dividing one measurement by another) are also provided by almost any test tool. As long as you avoid dividing by zero, these are also quite common. Some examples include:

    • Testcases executed per week.
    • Defects generated per week.
    • High severity defects as a percentage of all defects
    • Etc., etc., etc.

    Again, the test management tool supplies these and other metrics; the only concern is to make sure the measurements are for your project and time period (and not someone else’s).
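
    To make the arithmetic concrete, a metric such as “high severity defects as a percentage of all defects” is just one measurement divided by another, with a guard for the divide-by-zero case (the counts below are invented):

        def percentage(part, whole):
            # Guard against dividing by zero (e.g., no defects raised this week).
            return 0.0 if whole == 0 else 100.0 * part / whole

        high_severity_defects = 7
        all_defects = 42
        print(f"High severity: {percentage(high_severity_defects, all_defects):.1f}% of all defects")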

    The items that we find are missed the first time around are the trend measurements. Since there are no trends in the first week (a single data point is not a trend), and pretty useless ones in weeks 2 and 3 of any project, the trends become an extra calculation in the third or fourth week. At that point, they may supply some unpleasant information, such as:

    • High Severity defects are increasing in both number and percentage of all defects.
    • Defect fix time is increasing rapidly as the project progresses.
    • Testcase execution has slowed to a crawl.
    • Etc., etc., etc.

    Usually, the test manager has a feel for this and probably knows that the testing is not going well, but the trend analysis brings it out without question.

    The only caveat is to make sure you are comparing the same items from week to week (otherwise you might as well throw the comparison out).
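
    As a minimal sketch of a trend calculation (the weekly numbers are invented, and this assumes the same measurement has been collected consistently each week):

        # Weekly counts of high severity defects, from week 1 onward.
        high_severity_by_week = [2, 3, 5, 9, 14]

        # Week-over-week deltas make the trend explicit: a run of positive
        # deltas means high severity defects are increasing.
        deltas = [b - a for a, b in zip(high_severity_by_week, high_severity_by_week[1:])]
        print("Week-over-week change:", deltas)  # [1, 2, 4, 5] -- rising every week

        # One or two data points are not a trend; wait for a few weeks of data.
        if len(deltas) >= 3 and all(d > 0 for d in deltas[-3:]):
            print("High severity defects have risen three weeks running.")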

  • What should be reported from your Testing Efforts – Part 2

    This seems like an obvious question, and many people have successfully conquered it and set up a standard for reporting. Every once in a while it falls apart and people get blindsided by requests they did not expect, but for the most part it works fairly well.

    We first mentioned this question a couple of weeks ago. The obvious and simple answer is to provide what your stakeholders want to receive. The only issue with that statement is that we find a lot of stakeholders:

    1. Do not always know what they want at the beginning of a project.
    2. Change their minds as the project’s risk profile changes (stakeholders take more interest as project risk increases and commensurately less interest as it decreases).

    Since testers and Quality Assurance personnel are in the ‘Risk Reduction’ business, we are in a good position to answer that question for them. As part of the analysis of the project and the required testing, we will be judging the risk of the project and concentrating our testing on it. So we have a handle on the overall risk profile at the start and can begin with a level of reporting that reflects that risk profile.

    Since Software Testers and Quality Assurance personnel are also the ‘canary in the mine’, we can ramp up the reporting preemptively when things start to go wrong. For example, if there is a sudden surge in severe defects in a project, we can start providing more information and recommendations, as sketched below.
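
    As one hypothetical way to detect such a surge (the threshold and numbers here are arbitrary, not a standard), compare the latest week’s severe defect count against the average of the preceding weeks:

        def should_escalate(weekly_severe_counts, factor=2.0):
            # Escalate reporting when the latest week's severe defect count
            # is more than `factor` times the average of the earlier weeks.
            *history, latest = weekly_severe_counts
            if not history:
                return False  # a single data point is not a surge
            baseline = sum(history) / len(history)
            return latest > factor * max(baseline, 1)

        print(should_escalate([3, 2, 4, 11]))  # True -- time for fuller reporting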

    Most of the above seems obvious, but when you are busy testing, the temptation is to leave the reporting at the same level (or even to reduce it). That is the exact opposite of what is needed.

    Next week – Some specifics for reporting.

  • May 2019 QA Events in the GTA and Beyond

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events this May in Ontario, Canada. The events take place in Toronto and Kitchener-Waterloo over the coming weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!

    THE NEED FOR CYBERSECURITY IN THE 21ST CENTURY
    Tuesday, May 28, 2019, 6:00 p.m. – The Albany Club, 91 King Street East, Toronto, Ontario
    Presenters: Jeremy Critch and Pranav Mehndiratta

    MEASURING QUALITY: TAKE YOUR ESCAPED DEFECTS COUNT AND STUFF IT
    Wednesday, May 29, 2019
    Speaker: James Spere