Tag: Process Improvement

  • Register this week for the January TASSQ & KWSQA Events

    Register for TASSQ and KWSQA


    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance events happening this January in Ontario, Canada. The events take place in Toronto and Waterloo in two weeks' time. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!


    Photo by Antenna on Unsplash




    VERIFICATION, VALIDATION AND CERTIFICATION OF EMBEDDED SOFTWARE

    January 28, 2020, 5:30 p.m. – The Albany Club, 91 King Street East, Toronto, Ontario

    Presenter:  Dr. Akramul Azim, Assistant Professor in software engineering at Ontario Tech University


    Quality Starts With a Shared Understanding

    January 29, 2020, 11:30 a.m. – University of Waterloo

    Presenter:  Jeff Kosciejew

  • Quality Assurance and Artificial Intelligence

    In late December we heard two sides of the argument about Quality Assurance (QA) and Artificial Intelligence (AI).

    Side one from a Financial Executive: ‘AI will do away with the need for testing, testers, and QA’.

    Side two from a Quality Assurance Guru: ‘AI will mean more work than ever for QA.’

    We tend to think it will be somewhere in the middle.

    1. AI will automate some tasks that are now done manually. We are already seeing this with some of the tools. This process has been ongoing for a long time.
    2. AI will change the way programming is done. As a recent article stated, some of the programs were very simple; the data, however, was critical, and every change in the data led to very different results.
    3. Following on from the point above, AI will allow us to do things in software that we could not have done before. This is already occurring.
    4. However, we will still have to test this to make sure we are using the right data and drawing the correct conclusions.

    QA is going to need to know how to understand, find, and load unbiased data.

    QA will need to understand and test how the conclusions are drawn, rather than what the actual results are at the time of test execution.
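    As a rough illustration of the kind of data check QA may need to run, the sketch below flags a skewed label distribution in a training set. The function name, tolerance, and toy data are all our own assumptions for the example, not any particular tool's API.

```python
from collections import Counter

def check_label_balance(labels, tolerance=0.10):
    """Flag labels whose share deviates from a uniform split by more than `tolerance`.

    A heavily skewed training set is one common source of biased AI
    conclusions, so a check like this can run during test-data provisioning.
    """
    counts = Counter(labels)
    expected = 1.0 / len(counts)  # share each label would have if balanced
    total = len(labels)
    skewed = {label: count / total
              for label, count in counts.items()
              if abs(count / total - expected) > tolerance}
    return skewed  # an empty dict means the set is roughly balanced

# A toy dataset: approvals heavily outnumber rejections.
labels = ["approve"] * 90 + ["reject"] * 10
print(check_label_balance(labels))  # {'approve': 0.9, 'reject': 0.1}
```

    A real pipeline would extend this beyond label counts to feature distributions, but even a crude check like this catches obviously unrepresentative data before it shapes the conclusions under test.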

    This will impact everything we do from setup, to data provisioning, to execution and reporting. Test planning might escape the maelstrom, but we would not bet on it.

    Happy 2020s

    Give us a call to continue the discussion.

    Photo by Franki Chamaki on Unsplash

  • Quality Assurance 2020

    We received an interesting article last week from a friend about Technology Trends.

    The 10 Best Tech Products of the 2010s were listed. It was surprising how old some of them were; some of this technology has been around much longer than people remember!

    Of course, the 10 best had to be followed up with the worst failures. Only 5 of those, but they still resonate for the amount of money sunk into them and the scale of the failure in some cases.

    The next section was a prediction of five technologies for the next decade. A decade seems like a long time for some of these; we expect some to be mainstream within a year or two, and some are used already.

    They are:

    • Artificial Intelligence (AI)
    • Blockchain
    • Augmented Reality and Virtual Reality (AR/VR)
    • Big Data
    • Internet of Things (IoT)

    So what is the impact on Software Testing? Some of these technologies will be a huge challenge for those who are used to dealing with static inputs and static results, because most of them do not work on that principle. The test environment will have to change too. Test environments have already changed in the last few years to accommodate many external feeds and applications. Now they will have to expand further to accommodate inputs we do not use today, as well as the volumes of data needed for the testing.

    So what is the impact on Quality Assurance (Process Improvement)? There may be an even bigger impact on how we work. In the past, we could recover from some mistakes by adding items late in the process. That will no longer be possible with some of these technologies. Failure to consider all items at the beginning of a project could have much larger costs later on. We cannot afford that type of omission in 2020 and beyond.

    Photo by Alex Kotliarskyi on Unsplash

  • Quality Assurance 2019 Review

    Quality Assurance is, as usual, up and down. Some organisations are buying into Process Improvement and looking long term, and many are realizing that the solution is not necessarily more testing or another test tool. They understand that the problems lie in how things are done rather than in the tools used.

    However, many organizations are still adding test tools or adding test phases and layers without adding value to the Software or the Product.

    We completed several assessments over the past year and provided seminars aimed at process improvement. We are now up to the stage of implementing the process improvements identified. These organizations are moving ahead and planning for the long term. They can see the value of improvement.

    However, on the other hand, some of the people we are dealing with are struggling to complete the necessary testing without considering the long term improvement. They are challenged by the necessity of getting testing done for clients and do not think they have the time for process improvement.

    So will this change in 2020? It seems unlikely in view of the above unless we consider the following:

    1. Calculate the real benefit of implementation of process improvement over multiple projects. This is the long term, high level, view of how process improvement benefits multiple projects.
    2. Define and gather the appropriate measurements and metrics to determine whether the process improvement is having the desired effect and also to identify further process improvement opportunities.
    3. Select and acquire the appropriate technology to support the process improvement effort. Talk to us about a well tested methodology for QA technology acquisition.
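    As one illustrative measurement for point 2, the defect escape rate (the share of all defects that are found only in production) is simple to gather and to compare across releases. The function and the before/after figures below are invented for the example; any metric your organisation already tracks could stand in.

```python
def defect_escape_rate(found_in_test, found_in_production):
    """Fraction of all defects that escaped to production.

    A falling escape rate across releases is one simple signal that a
    process improvement is having the desired effect.
    """
    total = found_in_test + found_in_production
    return found_in_production / total if total else 0.0

# Before and after a hypothetical review-process change:
print(defect_escape_rate(80, 20))  # 0.2
print(defect_escape_rate(95, 5))   # 0.05
```

    Tracked over multiple projects, a trend in a metric like this is exactly the long-term, high-level view that point 1 calls for.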

    Photo by Bekir Dönmez on Unsplash

  • Automated Testing

    Judging from what we have heard, AI seems to be the favourite flavour for Automated Testing in 2019. Many of the tools are stating that they are AI enabled or AI enhanced and claiming massive productivity gains as a result.

    We are still seeing tools that require a lot of technical knowledge and other ones that have hidden the actual scripting from the users.

    One other major change is that most tools now offer either a cloud-based or an on-premises solution to suit every client’s wishes and, more importantly, their security needs.

    Tools go through a cycle every few years. For a while we get a different test tool for every possible situation, then someone comes and consolidates a large number of tools into one tool that addresses most situations. Then the cycle starts again. Obviously this cycle is driven by the technology used to build applications and what level of testing is needed.

    In addition to the technology that is used to build any particular product, the methodology also impacts the way automation is applied. Any iterative methodology has vastly different needs from a standard Waterfall methodology.

    What we are not seeing is much initial analysis to select the best test tool for an organisation. We are still seeing selection based on features and not necessarily on functionality that is applicable throughout the organisation. We are seeing some ROI for individual automation efforts, but only after the tool is selected and implemented. However, as discussed last week, that calculation does not cover the entire organisation or an extended time period. This means we are losing out on some automation opportunities and undertaking others based on a false calculation.

    So will this change in 2020? It seems unlikely in view of the above unless we consider the following:

    1. Calculate the real benefit of implementation of automation over multiple projects and years.
    2. Find an automation tool that suits your situation. There are many good ones around; you just need to find the appropriate one. Talk to us about a well tested methodology for test tool acquisition.

    Photo by Franck V. on Unsplash

  • Manual Testing

    This year is the nth time we have heard about the demise of Manual Testing and the (n+1)st time it has not occurred. To paraphrase Mark Twain: “Rumours of the death of Manual Testing have been greatly exaggerated.”

    Why does this keep coming up year after year?

    1. We keep inventing new items that are not amenable to being automated.
    2. New startups have neither the time nor the budget to worry about automating testing. Their emphasis is on getting the product out the door and into the hands of their customers.
    3. Some organisations have a great deal invested in an old automation tool. They are not maintaining the existing scripts or adding new ones, but no one is willing to throw them out.
    4. Some testing tools have not lived up to their promises and people are unwilling to try again with a new test tool.
    5. Most project managers do not have budget for automation of testing and since they do not benefit from it (the next project benefits) they see little reason to add it to their project.
    6. If it becomes a corporate or central responsibility to automate, then the question of funding becomes awkward. Who is responsible for the cost of the tool and the automation effort? How is that cost amortized and apportioned?
    7. It appears cheaper to get Manual Testers.

    So will this change in 2020? It seems unlikely in view of the above unless we consider the following:

    1. Calculate the real cost of repeatedly executing the same test cases manually.
    2. Calculate the real benefit of implementation of automation over multiple projects and years.
    3. See whether the automation will pay for itself using the above two figures.
    4. Find an automation tool that suits your situation. There are many good ones around; you just need to find the appropriate one. Talk to us about a well tested methodology for test tool acquisition.
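    The arithmetic behind steps 1–3 can be sketched as a simple break-even calculation. Every figure below is illustrative and would come from your own projects; the function name and parameters are our own.

```python
def automation_break_even(manual_cost_per_run, runs_per_year,
                          tool_and_scripting_cost, automated_cost_per_run):
    """Return the number of years until automation pays for itself,
    or None if it never does at these figures."""
    yearly_saving = (manual_cost_per_run - automated_cost_per_run) * runs_per_year
    if yearly_saving <= 0:
        return None  # automation never recoups its cost at these figures
    return tool_and_scripting_cost / yearly_saving

# Example: $500 per manual regression run, 24 runs a year,
# $20,000 up front for tool licences and scripting, $50 per automated run.
years = automation_break_even(500, 24, 20_000, 50)
print(f"Break-even after {years:.1f} years")  # Break-even after 1.9 years
```

    Because the up-front cost is amortized over every future run, the calculation only looks attractive when made across multiple projects and years, which is exactly why a single project manager's budget rarely justifies it.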

    Photo by Hunter Haley on Unsplash

  • Register this week for the November TASSQ Event

    Register for TASSQ


    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider this event to network with other QA people or learn some of the new ideas in QA.


    NVP Software Solutions will be participating in the following software testing and quality assurance event happening this November in Ontario, Canada. The event takes place in Toronto in two weeks' time. Check out the relevant website for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!


    Photo by Antenna on Unsplash




    CYBERSECURITY AND PRIVACY TRENDS AND ITS IMPACT TO THE CANADIAN MARKET

    November 26, 2019, 6:00 p.m. – The Albany Club, 91 King Street East, Toronto, Ontario

    Presenter:  Carlos Chalico

  • RCA – Why – Part 2

    Three weeks ago we asked why we would bother with Root Cause Analysis. We asked some questions and provided some answers. However, that analysis concentrated on a single defect and the use of Root Cause Analysis in that case. The power of Root Cause Analysis really shows when we can solve a whole class of defects.

    There is a good chance that a problem occurring in one place in the code has been repeated elsewhere. Errors have a tendency to be repeated! This came up recently when we were asked to look into a fix that had been implemented 20 years ago and seemed to be unravelling. This is what is sometimes called a latent defect (one lying dormant in production code that suddenly surfaces). Suffice it to say it was the same fix throughout the code, and Root Cause Analysis at the time the defect originally occurred would have been very helpful: it would have prevented the expensive and time-consuming fix required recently, paying for itself many times over.

    So the next time you have an error, take a look at it and see if you can put something down in the defect description indicating where it might have originated. Some classification will go a long way to describing where we need to concentrate more resources.
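    A minimal sketch of that kind of classification, assuming hypothetical root-cause categories and defect IDs of our own invention: tallying defects by origin shows at a glance where resources should be concentrated.

```python
from collections import Counter

# Hypothetical defect records; the root-cause taxonomy would be your own.
defects = [
    {"id": "D-101", "root_cause": "requirements"},
    {"id": "D-102", "root_cause": "copy-paste"},
    {"id": "D-103", "root_cause": "copy-paste"},
    {"id": "D-104", "root_cause": "environment"},
    {"id": "D-105", "root_cause": "copy-paste"},
]

# Count defects per root cause, most frequent first.
by_cause = Counter(d["root_cause"] for d in defects)
for cause, count in by_cause.most_common():
    print(f"{cause}: {count}")
```

    Even a one-word classification captured at defect-logging time makes this report possible; without it, the pattern behind a repeated fix stays invisible for years.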

    October is Quality Month. Sign up for our newsletter to see some Quality Month material, or request a copy if you are too late for this month.

    Photo by JJ Ying on Unsplash