Tag: Software Testing Strategy

  • What should be reported from your Testing Efforts – Part 1?

This seems like an obvious question, and many people have successfully answered it and set up a standard for reporting. Every once in a while it falls apart and people are blindsided by requests they did not expect, but for the most part it works fairly well.

However, even when reporting is well planned, there is often a substantial difference in what people expect. Some of your stakeholders will want every detail of every test result, defect, and metric that you can generate. They will look, evaluate, and ask questions. Others will want to concentrate on the highlights and only investigate when there is an obvious problem impacting the project. The last category might be summed up by a comment I heard at one Project Meeting, addressed to the test manager by the biggest stakeholder: “As long as you say it is okay, I don’t want to hear any more.” We removed the actual names to protect the guilty!

    While we will come back to this question in a couple of weeks and welcome your input in the interim, one obvious comment comes to mind immediately. Test tools have had a huge impact on this aspect of testing. The ability to record almost everything, drill down, add comments, set statuses, and move items from user to user has facilitated reporting.

In an interesting story from one client a long time ago, the layers of management caused the whole process to stretch over two weeks from the time a test was actually done to the time the report was consolidated for the last layer of management. We have moved a long way from that!

  • Does your Software Testing Pay – Part 2

    “Does your Software Testing Pay” is the question we posted two weeks ago.
    There are obviously two parts to this question: one is how much it costs and the other is how much it saves.

    The cost portion is reasonably easy:

    1. Chargeback rate on resources (this means hours per release must be recorded).
    2. Equipment and space usage (if not included in the above).
    3. Test Tool cost (amortized over all the projects).
    4. Opportunity cost if the resources could otherwise be engaged in something else.

    The savings portion is somewhat harder:

    What would each found defect have cost to fix in production? This is obviously an estimate.  One calculation is supplied below.

    1. Look at each defect found by testing.
    2. Estimate the probability of it occurring after release of the code to the users. You might want to take estimates from several people and average them. The probabilities must be between 0 and 1, so convert any percentage estimates to a figure between 0 and 1 before using them.
    3. Estimate the cost to the organisation under the assumption that the defect does occur in production. This cost includes the direct costs to the company (fixing, testing, and deploying) and the indirect costs (administration, etc.) that are often hidden (as with an iceberg, 9/10 of the costs are hidden).
    4. Add the cost to the customers in rework, lost data, and inability to respond properly.
    5. Add up the costs per defect and multiply by the probability from step 2.
    6. Add up the resulting figure for all defects found by testing for a release.

    If Savings > Costs, your software testing is paying for itself.
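The calculation in the steps above can be sketched in a few lines of Python. The defect figures, probability estimates, and cost totals below are hypothetical examples, not real project data; the averaging shows one way of combining several people's estimates, as suggested in step 2.

```python
# Sketch of the expected-savings calculation described above.
# All figures are hypothetical examples, not real project data.

def expected_saving(probability_estimates, production_cost):
    """Average several 0-1 probability estimates (step 2) and
    multiply by the estimated production cost (steps 3-5)."""
    avg_p = sum(probability_estimates) / len(probability_estimates)
    return avg_p * production_cost

# One entry per defect found by testing (step 1):
# ([probability estimates], estimated cost if it occurs in production).
defects = [
    ([0.8, 0.6, 0.7], 12000),  # likely defect, costly in production
    ([0.2, 0.1],       3000),  # unlikely defect, cheap to handle
]

# Step 6: add up the expected savings over all defects in the release.
savings = sum(expected_saving(p, c) for p, c in defects)

# Cost side: chargeback + equipment + tools + opportunity cost.
costs = 8000

print(f"Savings: {savings:.0f}, Costs: {costs}")
if savings > costs:
    print("The testing is paying for itself.")
else:
    print("The testing costs more than it saves.")
```

With these example numbers the expected savings exceed the costs, so this release's testing would pay for itself; in practice the sensitivity of the result to the probability estimates is worth examining before drawing conclusions.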

  • Does your Software Testing Pay?

    “Does your Software Testing Pay” is another question (in the series we are addressing here) that comes up quite frequently. Although it may not be stated directly, it can manifest itself in the following actions:
    1. Lack of resources for testing.
    2. Lack of time provided for testing.
    3. Lack of suitable hardware on which to conduct testing.
    4. Lack of software testing tools.
    5. …
    This is not a comprehensive list but it covers many of the major manifestations of not having an answer as to whether the testing pays. If the question cannot be answered, then it is difficult to justify expenses or investment in testing. In addition, some Quality Assurance people go to great lengths to avoid answering this question.
    Come back in two weeks to see one way of addressing this question. It will require research, metrics, and statistics.

    Contact us or join the discussion.
    Past Blogs
    Monthly Newsletter

  • Are you satisfied with your testing?

    Following on from the theme of last month, many organisations are dissatisfied with their testing. They feel it is incomplete, ineffective, or too expensive. At the end of the testing effort they are left with a vague feeling of unease. Often it is difficult for people to quantify their concerns, but they are very real and lead to delays and ongoing, expensive testing in an effort to remove this feeling.

    The trouble is that the more they do in testing, the more they may realise what they have not done. This does not increase the confidence level! Furthermore, if they do find problems during this ‘extra’ testing effort, the level of confidence drops commensurately, and even more testing is required until there is no energy, budget, or time left.

    Come back on February 25 to see how a Quality Assurance process can address these concerns long before they become issues.


  • Is your Software Testing Ad-Hoc?

    We wanted to start the new year off with a topic we hear about from a great many people.

    Our Software Testing is ad hoc (i.e. created or done for a particular purpose as necessary).

    • It is never reused.
    • We are always looking at testing each project as if it were a brand new experience.
    • Very little gets carried forward from previous projects, and a lot of material seems to disappear.

    If you have heard this or felt this way, you are not alone. The comment that “We had this somewhere but I cannot remember where or cannot find it right now” gets repeated a lot.

    The question is why this occurs. Some of the answers are below:

    • Project budgets are not built with the intent of supplying tests to later projects.
    • No-one can predict whether the same testcases will be needed in a future project.
    • No-one can predict whether the testcases will be valid for a future project (may be outdated).
    • It is not possible to estimate how long it will be before an update is needed and we might re-use the testcases.

    All of the above reasons militate against creating and retaining robust testcases suitable for future use. The end result is ad-hoc testcases created for the project and discarded after one or a few uses.

    If you want a process that will solve this problem, come back in two weeks, when we will present a methodology that addresses it at minimal project cost and with positive ROI over the lifetime of the software.

    In the meantime, if you are in the GTA (Greater Toronto Area) or KW, see our next blog next week about the coming presentations.

    If you cannot wait two weeks for an answer, look at some of the following information:

    Contact us or join the discussion.
    Past Blogs
    Monthly Newsletter

  • Review of the Year – Manual Testing


    Manual Testing is something that seems to be on the minds of many Quality Assurance Managers and Test Leads. Usually they want out of Manual Testing and view Automated Testing (a blog is planned for next week) as the saviour of their budget and time constraints.

    However, judging by the vacant positions we are asked to fill, there is still no shortage of Manual Testing positions, at least in our area. There are still many requests for Manual Testers, with business knowledge preferred, and new software products and startups still begin with manual testing. We also get requests for Automated Testing, usually with specific tools requested, and will discuss this next week.

    The part that seems to be missing from many of the requests, and from the subsequent positions, is any discussion of the How, What, Why, When, and If of the manual testing.

    There seems to be limited thought given to How the testing is to be done, apart from some vague request to build testcases and execute them.

    Little consideration is given to What to test and Why beyond the statement: “We need to test the software”.

    When and If are not such an issue: “Yesterday” and “Definitely” are the one-word answers to those questions.

    These answers certainly provide freedom for the testers to do what they want, but that may not always align with all the stakeholders’ wishes and may be 180 degrees off in some cases.

    This leads to a poor ROI and a large waste of time and money.

    There will continue to be a market for manual testers for new changes and new applications that are not yet mainstream. We expect automation to take over many of the repetitive tasks (as has always been the case). The only open question at this stage might be what AI will do to the industry. That we cannot predict.

    Want to discuss the effectiveness of your Manual Testing further? Contact us.

  • QA with No Time


    One of the constant issues that comes up in discussions or classes is the lack of time for Quality Assurance and Quality Control. We seem to be under constant time pressure, and with Agile and DevOps it has become worse rather than better.

    While Quality Assurance cannot anticipate everything, the Continuous Improvement aspect can help improve the time considerations. The key is to prioritise and plan for the high priority items to be completed.

    Some people will say (with a fair amount of truth) that even planning for the priorities and only doing those high priority items will still not be possible in the time allotted for Quality Assurance and Quality Control. With development reacting to changing requirements from the users, constant upgrades in technology (both hardware and software) and impacts from other projects, the time scale can shrink radically.

    However, without a definitive list of what must be done, with the risk of not completing each item attached, it is very hard to push back against the time pressure. With the list in hand, it is easier to quantify the risk of not doing something.

    Some people will say that they do not have the time even to create the list. They are under extreme pressure to start testing immediately and report issues so that the developers have something to handle, now that they have finally delivered all the changes requested by the users. Our recommendation, in this case, is simply to make a list of the functions or requirements, assign a High, Medium, or Low risk of not testing each one, and send it around for review. This will at least alert people to the challenges faced by Quality Assurance and Quality Control.
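As a minimal sketch of the quick list described above (the function names and risk rankings here are invented for illustration), even a few lines of Python are enough to keep the list sorted so the highest risks of not testing surface first for review:

```python
# A minimal risk list as described above; the functions and
# rankings below are hypothetical examples.
RISK_ORDER = {"High": 0, "Medium": 1, "Low": 2}

# (function or requirement, risk of NOT testing it)
risk_list = [
    ("Login / authentication", "High"),
    ("Invoice calculation",    "High"),
    ("Report export",          "Medium"),
    ("Help-page formatting",   "Low"),
]

# Sort so the highest risks appear first when circulated for review.
for function, risk in sorted(risk_list, key=lambda item: RISK_ORDER[item[1]]):
    print(f"{risk:6}  {function}")
```

A spreadsheet circulated by email serves the same purpose; the point is simply that the list exists, carries a risk ranking, and is seen by the stakeholders before testing starts.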

    Want to discuss your list further? Contact us.

  • A Better Way – Case Study 5 – Distributed with Poor Testing


    In our last several blogs we have discussed “A Better Way to Test”.

    The issue is how to apply this to actual situations. We have five Case Studies that we plan to use over the next several weeks to address this. The fifth might be called “Distributed with Poor Testing”.

    In this case the developers were geographically remote, the Quality Assurance (such as it was) was local, and the clients were geographically remote. This was a somewhat unusual situation, since it is frequently the Quality Assurance that is remote and the other two that are local. However, some of the considerations had to be the same: we needed a way of communicating without losing anything in transit.

    Two solutions came to mind immediately:

    1. Make sure there are stated expectations regarding the flow of information. This applies to any artifact like a test case, a requirement, a defect, or a design document. We did this via some flow charts we placed in the Master Test Plan although there are certainly other places they could have been recorded.
    2. Acquire some tools that will support this information store. There are many free or very cheap cloud-based tools that support these processes.

    Once we had the framework in place, we set up the appropriate loops and feedback mechanisms to ensure good information flow and quality. As time went on, we expanded the groups who had access to the systems, ensuring that the information flowed in the correct way, and with sufficient security, to the concerned parties. The two-way flow of information allowed us to eliminate a backlog of defects that had accumulated and to start addressing customer concerns in a timely manner.

    The investment was not large when compared to the improvements realised.

    If you want to discuss this further contact us.