Tag: software testing

  • Does your Software Testing Pay?

“Does your Software Testing Pay?” is another question (in the series we are addressing here) that comes up quite frequently. Although it may not be stated directly, it can manifest itself in the following actions:
    1. Lack of resources for testing.
    2. Lack of time provided for testing.
3. Lack of suitable hardware on which to conduct testing.
    4. Lack of software testing tools.
    5. …
This is not a comprehensive list, but it covers many of the major manifestations of not having an answer to whether the testing pays. If the question cannot be answered, it is difficult to justify expense or investment in testing. In addition, some Quality Assurance people go to great lengths to avoid answering this question.
    Come back in two weeks to see one way of addressing this question. It will require research, metrics, and statistics.

    Contact us or join the discussion.
    Past Blogs
    Monthly Newsletter

  • Are you satisfied with your testing – Part 2

You may recall from the blog of two weeks ago that many organisations end up dissatisfied with their testing.

The key to resolving this, from a Quality Assurance perspective, is to plan your testing before you start. Decide what must be done, what should be done, and what need not be done before the project gets very far.

Sometimes people call these decisions ‘tradeoffs’, since the word implies that something is being traded away and someone is losing out. True tradeoffs do have those characteristics, but these decisions are different: here we are simply planning for what needs to be done.

Other people claim they need to see the software in order to know what to test. At the detail level this can be true, but it is not true at the upper levels.

Still others will claim that they will think of all the testing that needs to occur while they are doing it. This is not a bad method as long as the tester fully understands all the business, technical, and software requirements and can keep track of all of it. Small, low-risk projects can be done this way; larger projects with higher risks are not so easy.

A Quality Assurance process considers all the relevant items at the start and does not wait for a crisis to occur or for management to worry about what has or has not been completed. The scope is determined at the beginning and the decisions are taken at that point, not in the last 10% of the project with a huge amount of work still to complete. Ongoing reporting and process improvement ensure this works properly.

    Contact us or join the discussion.
    Past Blogs
    Monthly Newsletter

  • Are you satisfied with your testing?

Following on from the theme of last month, many organisations are dissatisfied with their testing. They feel it is incomplete, ineffective, or costs too much. At the end of the testing effort they are left with a vague feeling of unease. Often it is difficult for people to quantify their concerns, but the concerns are very real and lead to delays and ongoing, expensive testing in an effort to remove this feeling.

The trouble is that the more testing they do, the more they may realise what they have not done. This does not increase the confidence level! Furthermore, if they do find problems during this ‘extra’ testing effort, the level of confidence drops commensurately and even more testing is required, until there is no energy, budget, or time left.

    Come back on February 25 to see how a Quality Assurance process can address these concerns long before they become issues.

    Contact us or join the discussion.
    Past Blogs
    Monthly Newsletter

  • January 2019 QA Events in the GTA and Beyond

    If you are in the Greater Toronto Area or Kitchener-Waterloo you might want to consider these events to network with other QA people or learn some of the new ideas in QA.

NVP Software Solutions will be participating in the following software testing and quality assurance events happening this January in Ontario, Canada. The events take place in Toronto and Kitchener-Waterloo over the coming two weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!

  • Is your Software Testing Ad-Hoc?

We wanted to start the new year off with a topic we hear a lot about from many people.

Our Software Testing is Ad-hoc (i.e., created or done for a particular purpose as necessary).

    • It is never reused.
    • We are always looking at testing each project as if it were a brand new experience.
• Very little gets carried forward from previous projects, and a lot of material seems to disappear.

    If you have heard this or felt this way, you are not alone. The comment that “We had this somewhere but I cannot remember where or cannot find it right now” gets repeated a lot.

The question is why this occurs. Some of the answers are below:

    • Project budgets are not built with the intent of supplying tests to later projects.
• No-one can predict whether the same testcases will be needed in a future project.
    • No-one can predict whether the testcases will be valid for a future project (may be outdated).
    • It is not possible to estimate how long it will be before an update is needed and we might re-use the testcases.

All of the above reasons militate against creating and retaining robust testcases suitable for future use. The end result is ad-hoc testcases, created for the project and discarded after one or a few uses.
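One low-cost way to counter the “We had this somewhere but I cannot find it” problem is to keep testcases with a little searchable metadata. This is only a minimal sketch of that idea, not the methodology promised above; all field names and values here are hypothetical examples.

```python
# Sketch: store testcases with minimal metadata so they can be found
# and reused on later projects. All values are hypothetical examples.

testcases = [
    {"id": "TC-001", "project": "Billing 2018", "feature": "invoice totals",
     "tags": ["regression", "calculation"]},
    {"id": "TC-002", "project": "Billing 2018", "feature": "login",
     "tags": ["smoke", "security"]},
]

def find_by_tag(cases, tag):
    """Return the testcases carrying the given tag, regardless of project."""
    return [c for c in cases if tag in c["tags"]]

# A later project can now locate candidate testcases by tag instead of memory.
for case in find_by_tag(testcases, "regression"):
    print(case["id"], case["feature"])
```

Even a spreadsheet with the same columns would serve; the point is that retrieval is planned for, not left to memory.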

If you want a process that will solve this problem, come back in two weeks, when we will provide a methodology that does so at minimal project cost and with positive ROI over the lifetime of the software.

In the meantime, if you are in the GTA (Greater Toronto Area) or KW (Kitchener-Waterloo), see next week’s blog about the coming presentations.

If you cannot wait two weeks for an answer, look at some of the following information:

    Contact us or join the discussion.
    Past Blogs
    Monthly Newsletter

  • Review of the Year – Manual Testing

Manual Testing is something that seems to be on the minds of many Quality Assurance Managers and Test Leads. Usually they want to move away from Manual Testing and view Automated Testing (blog planned for next week) as the saviour of their budget and time constraints.

However, judging by the vacant positions we get requests to fill, there is still no shortage of Manual Testing positions, at least in our area. There are still a lot of requests for Manual Testers, with business knowledge preferred, and new software and startups still begin with manual testing. We also get requests for Automated Testing, usually with specific tools named, and will discuss this next week.

The part that seems to be missing from many of the requests, and the subsequent positions, is any discussion of the How, What, Why, When, and If of the manual testing.

    There seems to be limited thought given to How the testing is to be done apart from some vague request to build testcases and execute them.

    Little consideration is given to What to test and Why beyond the statement: “We need to test the software”.

When and If are not such an issue: ‘Yesterday’ and ‘Definitely’ are the one-word answers to those questions.

These answers certainly provide freedom for the tester to do what they want, but that may not always align with all the stakeholders’ wishes and may be 180 degrees off in some cases.

    This leads to a poor ROI and a large waste of time and money.

There will continue to be a market for manual testers for new changes and new applications that are not yet mainstream. We expect automation to take over many of the repetitive tasks (as has always been the case). The only open question at this stage might be what AI will do to the industry. That we cannot predict.

    Want to discuss the effectiveness of your Manual Testing further? Contact us.

  • QA with No Time

    One of the constant issues that comes up in discussions or classes is the lack of time for Quality Assurance and Quality Control. We seem to be under constant time pressure and with Agile and DevOps it has become worse rather than better.

    While Quality Assurance cannot anticipate everything, the Continuous Improvement aspect can help improve the time considerations. The key is to prioritise and plan for the high priority items to be completed.

    Some people will say (with a fair amount of truth) that even planning for the priorities and only doing those high priority items will still not be possible in the time allotted for Quality Assurance and Quality Control. With development reacting to changing requirements from the users, constant upgrades in technology (both hardware and software) and impacts from other projects, the time scale can shrink radically.

However, without a definitive list of what must be done, with the risk of not completing each item attached, it is very hard to push back against the time pressure. With the list in hand, it is easier to quantify the risk of not doing something.

Some people will say that they do not have the time even to create the list. They are under extreme pressure to start testing immediately and report issues so the developers have something to work on, now that they have finally delivered all the changes requested by the users. Our recommendation, in this case, is to simply make a list of the functions or requirements, assign High, Medium, or Low to the risk of not testing each one, and send it around for review. This will at least alert people to the challenges faced by Quality Assurance and Quality Control.
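The quick list described above can be sketched in a few lines. This is just an illustration of the High/Medium/Low approach; the function names and risk ratings are hypothetical examples, not a prescribed format.

```python
# Minimal sketch of the quick risk list described above: each entry is the
# risk of NOT testing that function. Names and ratings are hypothetical.

RISK_ORDER = {"High": 0, "Medium": 1, "Low": 2}

risk_list = [
    ("Customer login", "High"),
    ("Monthly report export", "Low"),
    ("Payment processing", "High"),
    ("Profile picture upload", "Medium"),
]

# Sort so the items most dangerous to skip appear first for review.
for name, risk in sorted(risk_list, key=lambda item: RISK_ORDER[item[1]]):
    print(f"{risk:<6}  {name}")
```

Circulated in this form, the list makes explicit which items will go untested if time runs out, and puts that decision in front of the stakeholders rather than leaving it implicit.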

    Want to discuss your list further? Contact us.

  • A Better Way – Case Study 4 – Caught Between Vendor and Client!

In our last several blogs we have discussed ‘A Better Way to Test’.

The issue is how to apply this to actual situations. We have five Case Studies that we plan to use over the next several weeks to address this. The fourth case study might be called “Caught Between Vendor and Client”.

Although one could argue that Quality Assurance has always been caught between a vendor or vendors (developers) and possibly multiple clients (users), the situation has become a little more obvious and formal with the purchase of software from outside groups. This is not something that is going to go away. Assembling a solution rather than building it has been around for quite a while and will probably become more frequent rather than less. The question is what Quality Assurance does to “Bridge the Gap” between what the client wants (‘perfection!’) and what the vendors are willing to supply in terms of proof, given competitive secrets and possibly some non-disclosure requirements.

In our case (as discussed here), we built a plan that provided the final client with what they wanted. We then mapped what was supplied by the vendors against what was required and filled in the rest. Not surprisingly, the main items missing were Integration testing between what the various suppliers had provided, and the types of testing that needed the entire system in place, including Performance, Security, and Usability (to name a few).
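The mapping exercise just described amounts to a coverage-gap check: list what the client requires, subtract what the vendors already cover, and what remains is the testing the plan must fill in. A minimal sketch of that idea follows; the test-area names are hypothetical examples, not the actual case-study items.

```python
# Sketch of the gap analysis described above: required testing minus
# what the vendors already supplied. Area names are hypothetical.

required = {
    "Functional - Vendor A module",
    "Functional - Vendor B module",
    "Integration - A to B interface",
    "Performance - full system",
    "Security - full system",
    "Usability - full system",
}

vendor_supplied = {
    "Functional - Vendor A module",
    "Functional - Vendor B module",
}

# The gap is whatever no vendor covers; this is what the plan must fill in.
gap = sorted(required - vendor_supplied)
for area in gap:
    print(area)
```

As in the case study, the gap tends to fall in integration and whole-system testing, since each vendor naturally tests only its own piece.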

Bridging the gap in this fashion satisfied everyone and made use of everything that was already in place. That saved us a lot of time and allowed us to concentrate on the tests that were critical to the client.

    If you want to discuss this further contact us.