Tag: software testing

  • Automated Testing

    Judging from what we have heard, AI is the favourite flavour for Automated Testing in 2019. Many tools now describe themselves as AI enabled or AI enhanced, and claim massive productivity gains as a result.

    We are still seeing tools that require a lot of technical knowledge, as well as others that hide the actual scripting from the user.

    One other major change is that most tools now offer either a cloud-based or an on-premise solution to suit every client’s wishes and, more importantly, their security needs.

    Tools go through a cycle every few years. For a while we get a different test tool for every possible situation, then someone comes and consolidates a large number of tools into one tool that addresses most situations. Then the cycle starts again. Obviously this cycle is driven by the technology used to build applications and what level of testing is needed.

    In addition to the technology that is used to build any particular product, the methodology also impacts the way automation is applied. Any iterative methodology has vastly different needs from a standard Waterfall methodology.

    What we are not seeing is much initial analysis to select the best test tool for an organisation. Selection is still based on features rather than on functionality that is applicable throughout the organisation. We do see some ROI calculated for individual automation efforts, but only after the tool has been selected and implemented. However, as discussed last week, that calculation does not cover the entire organisation or an extended time period. This means we are losing out on some automation opportunities and pursuing others based on a false calculation.

    So will this change in 2020? It seems unlikely, in view of the above, unless we consider the following:

    1. Calculate the real benefit of implementing automation over multiple projects and years (a rough sketch of such a calculation follows this list).
    2. Find an automation tool that suits your situation. There are many good ones around; you just need to find the appropriate one. Talk to us about a well-tested methodology for test tool acquisition.
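
    As an illustration of point 1, here is a minimal Python sketch of what an organisation-wide, multi-year benefit calculation might look like. Every project name and figure below is a hypothetical assumption, not data from any client; the point is simply that the benefit is summed across all projects and all years the tool is in use, not just the first project.

      # Hypothetical sketch: benefit of test automation summed over projects and years.
      # All figures are invented for illustration only.
      annual_licence_and_maintenance = 20_000  # assumed yearly tool cost
      initial_setup_cost = 35_000              # assumed one-time scripting and training cost

      # (project, manual regression cost per year, automated regression cost per year, years of use)
      projects = [
          ("Project A", 60_000, 15_000, 3),
          ("Project B", 40_000, 12_000, 2),
          ("Project C", 25_000,  8_000, 4),
      ]

      total_saving = sum((manual - automated) * years
                         for _, manual, automated, years in projects)
      tool_years = max(years for *_, years in projects)
      total_cost = initial_setup_cost + annual_licence_and_maintenance * tool_years

      print(f"Saving across all projects: ${total_saving:,}")
      print(f"Tool cost over {tool_years} years: ${total_cost:,}")
      print(f"Net benefit: ${total_saving - total_cost:,}")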

    Photo by Franck V. on Unsplash

  • Manual Testing

    This year is the n’th time we have heard about the demise of Manual Testing and the n+1’st time it has not occurred. To paraphrase Mark Twain, “Rumours of the death of Manual Testing have been greatly exaggerated”.

    Why does this keep coming up year after year?

    1. We keep inventing new items that are not amenable to being automated.
    2. New startups have neither the time nor the budget to worry about automating testing. Their emphasis is on getting the product out the door and into the hands of their customers.
    3. Some organisations have a great deal invested in an old automated tool. They are not maintaining the existing scripts or adding new ones, but no one is willing to throw them out.
    4. Some testing tools have not lived up to their promises and people are unwilling to try again with a new test tool.
    5. Most project managers do not have a budget for test automation, and since they do not benefit from it (the next project does), they see little reason to add it to their project.
    6. If it becomes a corporate or central responsibility to automate, then the question of funding becomes awkward. Who is responsible for the cost of the tool and the automation effort? How is that cost amortised and apportioned?
    7. It appears cheaper to get Manual Testers.

    So will this change in 2020? It seems unlikely, in view of the above, unless we consider the following:

    1. Calculate the real cost of repeatedly executing the same test cases manually.
    2. Calculate the real benefit of implementing automation over multiple projects and years.
    3. See whether the automation will pay for itself using the above two figures (a rough payback sketch follows this list).
    4. Find an automation tool that suits your situation. There are many good ones around; you just need to find the appropriate one. Talk to us about a well-tested methodology for test tool acquisition.
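
    To make points 1 to 3 concrete, here is a minimal Python sketch of the payback comparison. Every figure below is a hypothetical assumption for illustration; substitute your own costs for manual execution, script development and maintenance.

      # Hypothetical payback sketch: does automating a regression suite pay for itself?
      # All figures are invented for illustration only.
      runs_per_year = 12              # how often the same suite is executed
      manual_cost_per_run = 4_000     # assumed cost of one full manual pass
      automation_build_cost = 50_000  # assumed one-time cost to script the suite
      automation_cost_per_run = 500   # assumed cost to run and triage the automated suite
      maintenance_per_year = 6_000    # assumed yearly script maintenance

      for year in range(1, 6):
          manual_total = manual_cost_per_run * runs_per_year * year
          automated_total = (automation_build_cost
                             + (automation_cost_per_run * runs_per_year + maintenance_per_year) * year)
          status = "pays for itself" if automated_total < manual_total else "not yet"
          print(f"Year {year}: manual ${manual_total:,} vs automated ${automated_total:,} -> {status}")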

    Photo by Hunter Haley on Unsplash

  • RCA – Why – Part 2

    Three weeks ago we asked why we would bother with Root Cause Analysis. We asked some questions and provided some answers. However, that analysis concentrated on a single defect and the use of Root Cause Analysis in that case. The power of Root Cause Analysis really applies when we can solve a whole class of defects.

    There is a good chance that a problem occurring in one place in the code has been repeated elsewhere. Errors have a tendency to be repeated! This came up recently when we were asked to look into a fix that had been implemented 20 years ago and seemed to be unravelling. This is what is sometimes called a latent defect (one lying dormant in production code that suddenly surfaces). Suffice it to say, the same fix had been applied throughout the code, and Root Cause Analysis at the time the problem originally occurred would have been very helpful. It would have prevented the expensive and time-consuming fix required recently. Root Cause Analysis would have paid for itself multiple times over.

    So the next time you find a defect, take a look at it and record something in the defect description indicating where it might have originated. Even a rough classification will go a long way towards showing where more resources need to be concentrated.
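
    As a minimal sketch of that classification idea, the Python snippet below tallies defects by an assumed root-cause field so the biggest concentrations stand out. The category names and defect records are invented for illustration; the fields in your own defect tracker will differ.

      from collections import Counter

      # Hypothetical defect records; the "root_cause" classification is the field
      # we are suggesting you start filling in.
      defects = [
          {"id": 101, "root_cause": "copy of an earlier fix"},
          {"id": 102, "root_cause": "requirements misunderstood"},
          {"id": 103, "root_cause": "copy of an earlier fix"},
          {"id": 104, "root_cause": "boundary condition missed"},
          {"id": 105, "root_cause": "copy of an earlier fix"},
      ]

      # Count defects per root cause and list the biggest concentrations first.
      for cause, count in Counter(d["root_cause"] for d in defects).most_common():
          print(f"{count:2d}  {cause}")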

    October is Quality Month. Sign up for our newsletter to see some Quality Month material, or request a copy if you are too late for this month.

    Photo by JJ Ying on Unsplash

  • Register this week for the October events at TASSQ and KWSQA

    Last chance to register for TASSQ and KWSQA

       

    NVP is at StarCanada Wednesday and Thursday this week. Stop at our booth for your free stress-relieving bug.

       

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events happening this October in Ontario, Canada. The events take place in Toronto and Kitchener-Waterloo in the coming weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!


    Photo by Antenna on Unsplash




    THE POWER OF DESIGN SPRINTS FOR PRODUCT TEAM

    October 29, 2019, 6:00 p.m. – The Albany Club – 91 King Street East, Toronto, Ontario

    Presenters: Leah Oliveira and Carlos Oliveira

    Reality Driven Testing

    October 30, 2019, 11:30 a.m. – University of Waterloo

    Presenter: Rob Sabourin

  • October 2019 Software QA Events in the GTA and beyond

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events happening this October in Ontario, Canada. The events take place in Toronto and Kitchener-Waterloo in the coming weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!


    Photo by Antenna on Unsplash




    THE POWER OF DESIGN SPRINTS FOR PRODUCT TEAM

    October 29, 2019, 6:00 p.m. – The Albany Club – 91 King Street East, Toronto, Ontario

    Presenters: Leah Oliveira and Carlos Oliveira

    Reality Driven Testing

    October 30, 2019, 11:30 a.m. – University of Waterloo

    Presenter: Rob Sabourin

  • What should be reported from your Testing Efforts – Part 1?

    This seems like an obvious question, and many people have successfully conquered it and set up a standard for reporting. Every once in a while it falls apart and people get blindsided by requests they did not expect, but for the most part it works fairly well.

    However, even when it is well planned, there is often a substantial difference in what people expect. Some of your stakeholders will want every detail of every test result, defect, and metric that you can generate. They will look, evaluate, and ask questions. Others will want to concentrate on the highlights and only investigate when there is an obvious problem that is impacting the project. The last category might be summed up by a comment I heard at one Project Meeting, addressed to the test manager by the biggest stakeholder: “As long as you say it is okay, I don’t want to hear any more”. We have removed the actual names to protect the guilty!

    While we will come back to this question in a couple of weeks and welcome your input in the interim, one obvious comment comes to mind immediately. Test tools have had a huge impact on this aspect of testing. The ability to record almost everything, drill down, add comments, set statuses, and move items from user to user has facilitated reporting.
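
    As a minimal sketch of what those tool capabilities boil down to, here is a hypothetical Python test-result record with a status, an assignee and comments, plus a one-line roll-up for the highlight-level readers and a detail view for those who want to drill down. The field names are assumptions for illustration, not the schema of any particular test management tool.

      from collections import Counter
      from dataclasses import dataclass, field

      @dataclass
      class TestResult:
          test_id: str
          status: str = "not run"        # e.g. "passed", "failed", "blocked"
          assigned_to: str = ""          # who the item currently sits with
          comments: list = field(default_factory=list)

      results = [
          TestResult("TC-001", "passed", "alice"),
          TestResult("TC-002", "failed", "bob", ["Defect D-17 raised"]),
          TestResult("TC-003", "blocked", "carol", ["Waiting on test data"]),
      ]

      # Highlight-level summary for stakeholders who only want the headline numbers.
      summary = Counter(r.status for r in results)
      print(", ".join(f"{count} {status}" for status, count in summary.items()))

      # Detail view for stakeholders who want to drill down.
      for r in results:
          print(r.test_id, r.status, r.assigned_to, "; ".join(r.comments))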

    In an interesting story from one client a long time ago, the layers of management stretched the whole process out over two weeks from the time the test was actually done to the time the report was consolidated for the last layer of management. We have moved a long way from that!

  • Does your software Testing Pay – Part 4

    “Does your Software Testing Pay?” is the question we posted six weeks ago. We then posted a simple ROI calculation, followed by a few comments that were received as feedback. Those comments were appreciated, since they extended the idea past the direct defect-costing calculation. Today we want to wrap up this series with some general comments that extend the idea a little further.

    For the most part we have concentrated on the Software Testing aspect of Quality Assurance. Almost everyone we speak with is familiar with this aspect. Most software companies, or those who depend on software, complete some form of testing, and the concept of testing can be generalised to cover a multitude of activities. However, it is not the entire picture.

    So, at the risk of making the title slightly inaccurate, what other aspects of Quality Assurance can be considered:

    1. Process improvement for the entire SDLC.
    2. Timely intervention at the Root Cause of many of the defects to prevent them from occurring in the first place.
    3. Concentration on the required end-result to make sure everyone is working towards that end. It is surprising how often this is obscured in large organisations. We attended a course a couple of weeks ago and spoke to the instructor afterwards; apparently many people on the course had only been told a very small piece of what they needed to do (and part of that was to attend the course, although the reason why was not provided). Not only is that against the principle of keeping people informed about why they are doing something, it is also very de-motivating.

    The above list (particularly number 1) covers a lot of items that are very detailed and can add a lot of value to Quality Assurance efforts. We may take this up in the fall.

  • Does your software Testing Pay – Part 3

    “Does your Software Testing Pay?” is the question we posted four weeks ago. We then posted a simple ROI calculation two weeks ago.

    After we posted the second blog, we received a couple of comments to the effect that concentrating on defects was a very narrow focus in terms of cost recovery and benefits. This was certainly a valid criticism; we had concentrated on something that could be answered and calculated without too much effort.

    Some of the suggestions that came back were as follows:

    1. Contributing to a better product using the information gleaned from testing.
    2. Enhanced knowledge of the product for both testers and developers.
    3. Future design improvements.
    4. A better quality product.

    No doubt more items could be added to the above list.

    Now we just need to cost them.

    Since some of these are subjective benefits, we suggest documenting them and having all stakeholders independently assign each one to a value bucket. As a starting point, we can use the averaged value for each benefit and then convert that into a dollar figure to determine the benefit.
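
    Here is a minimal Python sketch of that averaging step, with hypothetical stakeholders, benefits, bucket scores and a dollar value per bucket point; every figure is an assumption chosen purely for illustration.

      # Hypothetical sketch: average each benefit's stakeholder "value bucket" scores
      # and convert them to a dollar figure. All numbers are invented for illustration.
      dollars_per_bucket_point = 10_000  # assumed conversion rate

      # Independent bucket scores (1 = low value, 5 = high value), one per stakeholder.
      scores = {
          "Better product from testing information": [4, 5, 3],
          "Enhanced product knowledge":              [3, 4, 4],
          "Future design improvements":              [2, 3, 3],
          "Better quality product":                  [5, 5, 4],
      }

      for benefit, buckets in scores.items():
          average = sum(buckets) / len(buckets)
          print(f"{benefit}: average bucket {average:.1f} -> ${average * dollars_per_bucket_point:,.0f}")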