Tag: QA

  • Automated Testing

    Judging from what we have heard, AI seems to be the favourite flavour for Automated Testing in 2019. Many tools claim to be AI-enabled or AI-enhanced, and promise massive productivity gains as a result.

    We are still seeing tools that require a lot of technical knowledge, and others that hide the actual scripting from the user.

    One other major change is that most tools now offer either a cloud-based or an on-premise solution to suit each client’s wishes and, more importantly, their security needs.

    Tools go through a cycle every few years. For a while we get a different test tool for every possible situation, then someone comes and consolidates a large number of tools into one tool that addresses most situations. Then the cycle starts again. Obviously this cycle is driven by the technology used to build applications and what level of testing is needed.

    In addition to the technology that is used to build any particular product, the methodology also impacts the way automation is applied. Any iterative methodology has vastly different needs from a standard Waterfall methodology.

    What we are not seeing is much initial analysis to select the best test tool for an organisation. We are still seeing selection based on features and not necessarily on functionality that is applicable throughout the organisation. We are seeing some ROI for individual automation efforts, but that is calculated after the tool is selected and implemented. However, as discussed last week, that calculation is not over the entire organisation or over an extended time period. This means we are losing out on some automation opportunities and completing others based on a false calculation.

    So will this change in 2020? It seems unlikely in view of the above unless we consider the following:

    1. Calculate the real benefit of implementation of automation over multiple projects and years.
    2. Find an automation tool that suits your situation. There are many good ones around; you just need to find the appropriate one. Talk to us about a well tested methodology for test tool acquisition.

    Photo by Franck V. on Unsplash

  • Manual Testing

    This year is the n’th time we have heard about the demise of Manual Testing and the n+1’st time it has not occurred. To paraphrase Mark Twain: “Rumours of the death of Manual Testing have been greatly exaggerated.”

    Why does this keep coming up year after year?

    1. We keep inventing new items that are not amenable to being automated.
    2. New startups have neither the time nor the budget to worry about automating testing. Their emphasis is on getting the product out the door and into the hands of their customers.
    3. Some organisations have a great deal invested in an old automated tool. They are not maintaining the existing scripts or adding new ones, but no one is willing to throw them out.
    4. Some testing tools have not lived up to their promises and people are unwilling to try again with a new test tool.
    5. Most project managers do not have budget for automation of testing and since they do not benefit from it (the next project benefits) they see little reason to add it to their project.
    6. If it becomes a corporate or central responsibility to automate, then the question of funding it becomes awkward. Who is responsible for the cost of the tool and the automation effort? How is that cost amortized and apportioned?
    7. It appears cheaper to get Manual Testers.

    So will this change in 2020? It seems unlikely in view of the above unless we consider the following:

    1. Calculate the real cost of repeatedly executing the same test cases manually.
    2. Calculate the real benefit of implementation of automation over multiple projects and years.
    3. See whether the automation will pay for itself using the above two figures.
    4. Find an automation tool that suits your situation. There are many good ones around; you just need to find the appropriate one. Talk to us about a well tested methodology for test tool acquisition.
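    Steps 1–3 above amount to a simple break-even check. The sketch below shows the arithmetic; every figure in it is a hypothetical placeholder, not a benchmark — substitute your organisation’s own numbers.

```python
# Hypothetical figures -- substitute your organisation's own numbers.
manual_cost_per_run = 40.0      # hours of manual effort per full regression run
automation_build_cost = 600.0   # one-time hours to script the suite
automation_maint_per_run = 2.0  # hours of upkeep per automated run
runs_per_year = 24              # regression runs across all projects
years = 3                       # horizon: multiple projects and years

# Step 1: cost of repeatedly executing the same test cases manually.
total_runs = runs_per_year * years
manual_total = manual_cost_per_run * total_runs

# Step 2: cost of automation over multiple projects and years.
automated_total = automation_build_cost + automation_maint_per_run * total_runs

# Step 3: does the automation pay for itself?
print(f"Manual:    {manual_total:.0f} hours")
print(f"Automated: {automated_total:.0f} hours")
print(f"Automation pays for itself: {automated_total < manual_total}")
```

    Note that with a one-year horizon the same numbers can flip the answer, which is exactly why the calculation must span multiple projects and years.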

    Photo by Hunter Haley on Unsplash

  • Defect Management Process reduces defects – Part 2

    Two weeks ago, we discussed how Root Cause Analysis of a defect may identify a process issue, the need to fix the process, and how that may not be specific to any one project. In particular, a process that is common to many projects can cause the same issue to recur in multiple places. The manifestations may be slightly different in each case, but the Root Cause is the same.

    Assuming that Root Cause is a process problem (and many, many problems that are repeated in multiple projects are process problems) the following steps need to be done:

    1. Carry out an impact analysis – how many projects/systems/people have been affected by this problem in the past year?
    2. How many projects/systems/people are going to be impacted by this problem in the coming year?
    3. Estimate the costs incurred in the past year due to this problem and estimate the expected costs in the coming year.

    We now know how much it has cost and is projected to cost over a two-year span. We need to implement a process improvement initiative, and that will cost money. Estimate the cost to change the process; as long as it is less than the projected cost for the coming year, we have a positive ROI and can proceed.

    1. Plan to change the process in a way that we hope will fix the problem. Note the cost of the planning exercise.
    2. Make the change in the process. Again – note the cost to make the change.
    3. Measure whether the change has removed the defect or at least reduced its occurrence. Process improvement will not always eliminate a defect entirely, it may reduce the number of times it occurs or reduce its severity.
    4. If there is no improvement or the situation is worse, plan to back out the change.
    5. If there is improvement but more could be done, then plan the next improvement and go through the process again.

    The above is just PDCA (Plan-Do-Check-Act) for Process Improvement.
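    The ROI gate that decides whether to proceed can be sketched in a few lines. All cost figures here are invented for illustration:

```python
# Hypothetical figures for one recurring process defect -- substitute your own.
cost_past_year = 80_000.0     # cost incurred in the past year (impact analysis)
cost_coming_year = 90_000.0   # projected cost in the coming year if unchanged
planning_cost = 5_000.0       # cost to plan the process change
change_cost = 25_000.0        # cost to make the change

improvement_cost = planning_cost + change_cost

# Proceed only if fixing the process costs less than letting it recur.
if improvement_cost < cost_coming_year:
    print(f"Positive ROI: spend {improvement_cost:.0f} to avoid {cost_coming_year:.0f}")
else:
    print("No ROI: back out or rethink the change")
```

    The measurement step then confirms (or refutes) whether the avoided cost actually materialised.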

    Photo by New Data Services on Unsplash

  • RCA – Why – Part 2

    Three weeks ago we asked why we would bother with Root Cause Analysis. We asked some questions and provided some answers. However that analysis concentrated on a single defect and the use of Root Cause Analysis in that case. The power of Root Cause Analysis really applies when we can solve a whole class of defects.

    There is a good chance that a problem that occurs in one place in the code has been repeated elsewhere. Errors have a tendency to be repeated! This came up recently when we were asked to look into a fix that had been implemented 20 years ago. It seemed to be unravelling. This is what is sometimes called a latent defect (one lying dormant in production code that suddenly appears). Suffice it to say, it was the same fix throughout the code, and Root Cause Analysis at the time the defect originally occurred would have been very helpful. It would have prevented the expensive and time-consuming fix required recently. Root Cause Analysis would have paid for itself multiple times over.

    So the next time you have an error, take a look at it and see if you can put something down in the defect description indicating where it might have originated. Some classification will go a long way to describing where we need to concentrate more resources.

    October is Quality Month. Sign up for our newsletter, to see some things about Quality Month or request a copy if you are too late for this month.

    Photo by JJ Ying on Unsplash

  • Register this week for the October events at TASSQ and KWSQA

    Last chance to register for TASSQ and KWSQA


    NVP is at StarCanada Wednesday and Thursday this week. Stop at our booth for your free stress relieving bug.


    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events happening this October in Ontario, Canada. The events are located in Toronto and Kitchener-Waterloo in the coming weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!


    Photo by Antenna on Unsplash




    THE POWER OF DESIGN SPRINTS FOR PRODUCT TEAM

    October 29, 2019, 6:00 p.m. – The Albany Club, 91 King Street East, Toronto, Ontario

    Presenters:  Leah Oliveira and Carlos Oliveira

    Reality Driven Testing

    October 30, 2019, 11:30 a.m. – University of Waterloo

    Presenter: Rob Sabourin

  • October 2019 Software QA Events in the GTA and beyond

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


  • RCA – Why – Part 1

    Why bother with Root Cause Analysis on Defects? We have enough to do without adding to the work!

    1. We found a defect.
    2. We fixed the defect.
    3. We retested the defect.
    4. We closed the defect (pending success on item 3).
    5. Enough said. Let’s move on to something else!

    We acknowledge that once we have a defect, all of the above have to occur (hopefully only once and not repeatedly). But Root Cause Analysis is intended to prevent 1–5 from ever coming back to bother us again. The key word is ever! We never want to see that defect again. In order to do that, we need to find out where it occurred and why it occurred, and make sure that combination never comes up again. In the long term this saves us a lot of money and time and allows us to concentrate on more important items.


    Photo by JJ Ying on Unsplash

  • Trend analysis on Defects – Part 1

    One of the items that is frequently missed in most organisations is Trend Analysis on Defects. This is the trend over time and over multiple projects. We are looking at whether things are improving, or not, over time. There are a number of prerequisites, which we will discuss today before returning to this topic in a couple of weeks with further information.

    Prerequisites

    1. A consistent method of classifying defects with regard to Priority and Severity. I include definitions and standards for Priority and Severity either in the Test Strategy for an organisation or in every test plan for every project. Note that without this in place, it is impossible to do any trend analysis.
    2. A commitment to fairly recording defects in every phase of the project no matter what methodology you use. Otherwise it is possible to make certain phases look better than others by ignoring or failing to record the defects.
    3. A commitment from management to the years it will take to obtain sufficient statistics for trends to be meaningful.
    4. A commitment to, and an understanding of, the methodology used to reconcile projects of different sizes that are built in different ways. A very large project built in-house cannot be directly compared to a small project where the software was purchased.
    5. An understanding of the statistical processes that will be used to generate the trends. It is far too easy to draw the wrong conclusion.

    If any of these prerequisites are not in place before anything starts, you are embarking on an expensive failure!
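    To show what prerequisite 1 buys you, here is a minimal sketch that tallies consistently classified defects per quarter. The defect records are invented for illustration; real data would span years and many projects.

```python
from collections import Counter

# Invented defect records: (quarter, severity) pairs, classified
# consistently per prerequisite 1.
defects = [
    ("2019-Q1", "High"), ("2019-Q1", "High"), ("2019-Q1", "Low"),
    ("2019-Q2", "High"), ("2019-Q2", "Low"), ("2019-Q2", "Low"),
    ("2019-Q3", "High"), ("2019-Q3", "Low"),
]

# Tally each (quarter, severity) combination to expose the trend over time.
counts = Counter(defects)
for quarter in sorted({q for q, _ in defects}):
    high = counts[(quarter, "High")]
    low = counts[(quarter, "Low")]
    print(f"{quarter}: High={high} Low={low} total={high + low}")
```

    Without a consistent severity classification, the High and Low buckets mean different things in different quarters and the resulting trend line is meaningless.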

    Photo by Stephan Henning on Unsplash