Tag: #softwaretesting

  • What should be reported from your Testing Efforts – Part 4

Feedback is always welcome, and this week we are indebted to Paul Seaman for some very valid comments on "What should be reported from your Testing Efforts – Part 2". He pointed out that the blog seemed to have a written bias, missed the value of verbal communication, and seemed to silo testers.

We were mainly considering final reports produced by an independent test team, with the potential need for an audit of the testing. So some bias may have crept in, as Paul pointed out. Let's look at each of his points.

Siloed Test Team: The more you silo a test team, the less effective they are. Information does not flow to them or from them, so the opportunities to learn from each other are lost. In addition, any information that does flow back and forth will likely be somewhat distorted by the time it gets to the other party. Testers need to be embedded in the team and included throughout the lifecycle.

Verbal Communication: This is an obvious follow-on from the previous point. If you are remote, or restricted in communication, then the chance to provide and receive feedback is reduced. Non-verbal communication tends to be asynchronous – something is sent and there is a delay before a response arrives. Verbal communication (throughout the project or testing effort) allows instant feedback and may speed up responses and reactions to changing events. The only thing you may lose is an audit trail. Anything crucial needs to be noted and put into a decision log for retention.

Written bias: This comes down to the same thing as the last comment in the previous paragraph. Crucial information that needs to be retained should be documented and stored. If the report is simply a status update, then it may not need to be fully documented. Point taken.

  • June 2019 QA Events in the GTA and Beyond

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events happening this June in Ontario, Canada. The events are located in Toronto and Kitchener-Waterloo in the coming weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!

    IS DIGITAL TRANSFORMATION A BUZZ WORD OR IS IT A BUSINESS EVOLUTION?

June 25, 2019, 6:00 p.m. – The Albany Club – 91 King Street East, Toronto, Ontario

Presenter: Kyle Hulme

    See KWSQA.org for details

  • What should be reported from your Testing Efforts – Part 3

    This seems like an obvious question and many people have successfully conquered it and set up a standard for reporting. Every once in a while it falls apart and people get blindsided by extra requests that they did not expect but for the most part it works fairly well.

We first mentioned this question a few weeks ago. Two weeks ago we said we would make some suggestions as to what could be recorded. The following measurements are gathered on almost every project we encounter.

    • Defects raised ranked by severity and priority.
    • Testcases completed, in progress, blocked, failed.
    • Number of times a testcase has been executed.
    • First time failures.
    • etc., etc., etc.

    Almost all test management tools will supply all these measurements and many more besides. Sometimes the question is which ones to select. Just make sure that you are getting the measurements for your project and your time period (otherwise the figures are misleading).

    Metrics (combinations of two measurements, usually formed by dividing one measurement by another) are also provided by almost any test tool. As long as you avoid dividing by zero, these are also quite common. Some examples include:

    • Testcases executed per week.
    • Defects generated per week.
    • High severity defects as a percentage of all defects
    • etc., etc., etc.

    Again the test management tool supplies these and other metrics and the only concern is to make sure the measurements are for your project and time period (and not someone else’s).
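A test management tool normally produces these ratios for you, but the arithmetic is simple enough to sketch. The sketch below uses hypothetical counts (the dictionary values are made up for illustration) and shows the one caveat mentioned above: guarding the division against a zero denominator.

```python
# Hypothetical weekly measurements pulled from a test management tool.
measurements = {
    "testcases_executed": 120,
    "weeks_elapsed": 4,
    "defects_total": 45,
    "defects_high_severity": 9,
}

def safe_ratio(numerator, denominator):
    """Divide one measurement by another, guarding against a zero denominator."""
    return numerator / denominator if denominator else 0.0

# Metrics are just one measurement divided by another.
metrics = {
    "testcases_per_week": safe_ratio(measurements["testcases_executed"],
                                     measurements["weeks_elapsed"]),
    "defects_per_week": safe_ratio(measurements["defects_total"],
                                   measurements["weeks_elapsed"]),
    "high_severity_pct": 100 * safe_ratio(measurements["defects_high_severity"],
                                          measurements["defects_total"]),
}

print(metrics)  # testcases_per_week 30.0, defects_per_week 11.25, high_severity_pct 20.0
```

The same caveat from the measurements applies: make sure the counts fed into these ratios come from your project and your time period.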

The items we find are missed the first time around are the trend measurements. Since there are no trends in the first week (a single data point is not a trend), and pretty useless ones in weeks 2 and 3 of any project, the trends become an extra calculation in the third or fourth week. At that point, they may supply some unpleasant information such as:

    • High Severity defects are increasing in both number and percentage of all defects.
    • Defect fix time is increasing rapidly as the project progresses.
    • Testcase execution has slowed to a crawl.
    • etc., etc., etc.

    Usually, the test manager has a feel for this and probably knows that the testing is not going well but the trend analysis brings it out without question.

    The only caveat is to make sure you are comparing the same items from week to week (otherwise you might as well throw it out).
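By week four there are enough data points to compare week over week. The sketch below uses made-up weekly counts (all names and numbers are hypothetical) and flags the first unpleasant trend listed above: high severity defects increasing as a percentage of all defects.

```python
# Hypothetical week-by-week counts; a trend needs several data points.
weekly_defects = [10, 14, 22, 35]    # all defects raised each week
weekly_high_sev = [1, 2, 5, 11]      # high severity subset of those defects

def high_sev_pct(high, total):
    """High severity defects as a percentage of all defects for one week."""
    return 100 * high / total if total else 0.0

pcts = [high_sev_pct(h, t) for h, t in zip(weekly_high_sev, weekly_defects)]
deltas = [later - earlier for earlier, later in zip(pcts, pcts[1:])]

# A run of positive week-over-week deltas makes the trend explicit,
# rather than leaving it to the test manager's gut feel.
trending_up = all(d > 0 for d in deltas)
print(pcts, trending_up)
```

The comparison only makes sense because each week's figures count the same items; if the definition of "high severity" or the project scope changes mid-stream, the deltas are meaningless.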

  • What should be reported from your Testing Efforts – Part 2?

    This seems like an obvious question and many people have successfully conquered it and set up a standard for reporting. Every once in a while it falls apart and people get blindsided by extra requests that they did not expect but for the most part it works fairly well.

We first mentioned this question a couple of weeks ago. The obvious and simple answer is to provide what your stakeholders want. The only issue with that answer is that we find a lot of stakeholders:

    1. Do not always know what they want at the beginning of a project.
    2. Change their minds as the project's risk profile changes (stakeholders take more interest as project risk increases and commensurately less interest as it decreases).

    Since testers and Quality Assurance personnel are in the 'Risk Reduction' business, we are in a good position to answer that question for them. As part of the analysis of the project and the required testing, we will be judging the risk of the project and concentrating our testing on it. So we have a handle on the overall risk profile at the start, and we can begin with a level of reporting that reflects that risk profile.

    Since Software Testers and Quality Assurance personnel are also the 'canary in the mine', we can ramp up the reporting preemptively at the first sign of trouble. For example, if there is a sudden surge in severe defects in a project, then we can start providing more information and recommendations.

    Most of the above seems obvious, but when you are busy testing, the temptation is to leave the reporting at the same level (or even reduce it). That is the exact opposite of what is needed.

    Next week – Some specifics for reporting.

  • May 2019 QA Events in the GTA and Beyond

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events happening this May in Ontario, Canada. The events are located in Toronto and Kitchener-Waterloo in the coming weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!

    THE NEED FOR CYBERSECURITY IN THE 21ST CENTURY

May 28, 2019, 6:00 p.m. – The Albany Club – 91 King Street East, Toronto, Ontario

    Presenters: Jeremy Critch and Pranav Mehndiratta

Wednesday, May 29, 2019 – Measuring Quality: Take Your Escaped Defects Count and Stuff It

    Speaker: James Spere