Tag: Quality Control

  • Scheduling Test Cycles

    Scheduling Test Cycles often seems to create challenges for managers, so we thought we’d tackle it in today’s blog. In our experience, there is an ingrained tendency among Test Managers and Development Managers not to leave time between the Test Cycles or the Fix Cycles for the other party to do their work.

    I have seen Test Cycles scheduled consecutively, with no room to actually fix anything. The idea was that the bugs could be fixed overnight or over the weekend, because nothing could be allowed to impede the test effort at this stage. The reverse problem is the schedule from a Development Manager who puts all the time into fixes or upgrades and allocates nothing for testing. The same question elicits a similar answer: testing can proceed overnight or on weekends.

    What is obvious is that there has to be compromise on both sides.

    However, it is possible to schedule overlap. Certainly Developers can start fixing bugs found early in the test cycle before the cycle is finished, and it’s probably better that they do. However, this requires strong promotion and code control procedures and a plan for how the environments are going to be organized; otherwise, fixes start getting into the test environment before other testing is done. Similarly, testing can continue even while Developers are in fix mode, but planning is required so that testers cover items the Developers aren’t working on at the moment.
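
    To make the trade-off concrete, here is a minimal sketch in Python (all the cycle and fix durations are invented figures) comparing a strictly sequential schedule against one where fixing overlaps the tail of each test cycle:

      # Hypothetical figures: three test cycles of 10 working days each,
      # with 5 working days of fixing after each of the first two cycles.
      TEST_CYCLE_DAYS = 10
      FIX_DAYS = 5
      CYCLES = 3

      # Strictly sequential: test, stop, fix, test again.
      sequential = CYCLES * TEST_CYCLE_DAYS + (CYCLES - 1) * FIX_DAYS

      # Overlapped: Developers start on early findings partway through each
      # cycle, so only part of each fix window extends the schedule.
      OVERLAP_DAYS = 3  # assumed days of fixing absorbed inside each test cycle
      overlapped = CYCLES * TEST_CYCLE_DAYS + (CYCLES - 1) * (FIX_DAYS - OVERLAP_DAYS)

      print(f"Sequential schedule: {sequential} days")  # 40 days
      print(f"Overlapped schedule: {overlapped} days")  # 34 days

    The savings are real, but only if the promotion and code control procedures above keep the incoming fixes out of whatever is still under test.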

    We just need to plan our way through this with the understanding that there will be changes as the project evolves, dependencies arise, and items change.

    Discussion Questions

    1. Have you scheduled Test Cycles?
    2. If yes to number 1, how did it work out?
    3. What would you have done differently based on what you know now?

    Next Week: Final meetings for the year

  • Test Run

    Our latest blog discusses the Test Run. For today’s purpose, NVP considers a Test Run to be one single execution of a testcase. This could mean that the testcase ran to completion and the expected and actual results were identical, or that the testcase’s actual results did not equal the expected results. We have stayed away from the words ‘successful’ and ‘unsuccessful’, since some may feel a testcase is only successful if it uncovers a problem and unsuccessful if it does not.

    We are interested in the statistics of test runs for a number of reasons:

    1. It helps in estimation
    2. It helps justify the time taken to test
    3. It provides a measure of code stability

    Estimation

    Knowing the number of runs of a testcase helps determine how long the cycles, and the whole test effort, will take next time. If we know we had to run each testcase an average of 5 or 6 times before it ran to completion without raising an issue, then we know how many times we may need to run it next time. Note that unsuccessful runs may include attempts that led to fixing the testcase or the relevant test data; once we have ‘debugged’ the testcase, these runs may not recur.
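
    As a minimal sketch of that arithmetic (the run counts, testcase count and minutes per run below are all invented):

      # Runs recorded per testcase on a previous, comparable project.
      # Early runs often include debugging the testcase or test data itself.
      runs_per_testcase = [6, 4, 7, 5, 8, 5, 6, 5]

      avg_runs = sum(runs_per_testcase) / len(runs_per_testcase)

      # Project the next effort, discounting one run per testcase as
      # one-time testcase debugging that should not recur.
      testcases = 120
      minutes_per_run = 30
      projected_runs = testcases * (avg_runs - 1)

      print(f"Average runs per testcase: {avg_runs:.1f}")  # 5.8
      print(f"Projected executions: {projected_runs:.0f} "
            f"(~{projected_runs * minutes_per_run / 60:.0f} hours)")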

    Justification

    If we only report the count of completed testcases with actual results equalling expected results, then each testcase might only show a single execution. This would hide a lot of work and effort and make the testers appear very unproductive. Showing that each testcase was executed 6 or 7 times before we were satisfied gives a much better idea of the effort involved.
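
    As a tiny illustration (with invented counts), compare the two ways of reporting the same cycle:

      # Hypothetical log: testcase -> number of executions this cycle.
      executions = {"TC-01": 7, "TC-02": 6, "TC-03": 8, "TC-04": 5}

      completed = len(executions)             # what a bare status report shows
      total_runs = sum(executions.values())   # the effort actually expended

      print(f"Testcases completed: {completed}")   # 4
      print(f"Total executions: {total_runs}")     # 26

    Four completed testcases and twenty-six executions describe the same week of work very differently.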

    Code Stability

    If a testcase is run a dozen times and only on the last time does it run to completion with Expected Results equal to Actual Results, then we may have a concern with code stability or whether that final run was really correct. Something that fails a dozen times and then is successful is highly suspect. Maybe the conditions changed, maybe we missed something, maybe the issue was finally fixed. Whatever the case, we are not sure of the stability.
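
    One way to surface such cases is to flag any testcase whose first clean run came only after a long string of failures. A minimal sketch, with both the run histories and the threshold invented:

      # Run history per testcase: True = ran to completion with
      # Expected Results equal to Actual Results. (Hypothetical data.)
      history = {
          "TC-101": [False, False, True],
          "TC-202": [False] * 12 + [True],  # passed only on the 13th attempt
      }

      SUSPECT_THRESHOLD = 6  # assumed: this many failures before a pass warrants review

      for case, runs in history.items():
          failures_before_pass = runs.index(True) if True in runs else len(runs)
          if failures_before_pass >= SUSPECT_THRESHOLD:
              print(f"{case}: passed only after {failures_before_pass} failed runs; "
                    "review the conditions, the data and the fix before trusting it")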

    Discussion Questions

    1. Do you have defined Test Runs?
    2. What is the worst case for the number of times they had to be run?
    3. What is your least number of runs?

    Next Week: Process Improvement

  • Test Cycles

    NVP considers a Test Cycle to be one complete execution of a group of test cases. The reason we’re interested in this particular item is that it leads to estimation. The first questions in any testing project are:

    1. How long is it going to take?
    2. How much is it going to cost?
    3. When will you be done?

    These questions can be difficult to answer when starting a project as a new tester or test manager, or when one has limited experience with the software to be tested. Having defined test cycles helps solve that problem.

    In order to answer those questions we need to:

    1. Define the contents of the group of tests constituting the cycle
    2. Get an estimate of how long each test will take
    3. Add up the resultant times
    4. Build in some contingency
    5. Use that as an estimate for the length of the cycle

    The above gives us an estimate for the length of a single cycle.

    The next question is how many cycles will be run. Our answer is usually three at a minimum, on the grounds that there are two debug cycles and, hopefully, a clean run. In our experience we have managed to get away with two cycles, but that’s unusual. Often it’s many more than three, especially if the code is weak or the full requirements are still being worked out. Usually you will have an idea after your first test cycle of how many will have to be run.

    To answer the question of when you will be done, multiply the number of projected cycles by their individual lengths, add time for the fixes to be made and promoted, and use the result as an estimate of the completion date (and, via the chargeback rate, of the cost).
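
    Putting the whole calculation together, here is a minimal sketch; the test durations, contingency, fix time and chargeback rate are all invented figures:

      # Steps 1-2: the tests in the cycle and their estimated durations (hours).
      test_estimates_hours = [2, 3, 1.5, 4, 2.5, 3, 2]

      # Steps 3-5: sum the times, build in contingency, and use the result
      # as the length of a single cycle.
      CONTINGENCY = 0.20  # assumed 20% buffer
      cycle_hours = sum(test_estimates_hours) * (1 + CONTINGENCY)

      # Projected cycles (two debug cycles plus a clean run), with time
      # between cycles for fixes to be made and promoted.
      CYCLES = 3
      FIX_HOURS_PER_CYCLE = 16  # assumed
      CHARGEBACK_RATE = 85      # assumed dollars per hour

      total_hours = CYCLES * cycle_hours + (CYCLES - 1) * FIX_HOURS_PER_CYCLE
      print(f"One cycle: {cycle_hours:.1f} hours")     # 21.6
      print(f"Total effort: {total_hours:.1f} hours")  # 96.8
      print(f"Estimated cost: ${total_hours * CHARGEBACK_RATE:,.0f}")  # $8,228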

    Discussion Questions

    1. Do you have defined test cycles?
    2. What is the worst case for the number of times they had to be run?
    3. What is your least number of runs?

    Next Week: Process Improvement

  • Upcoming Software Testing & Quality Assurance Events – April 2016

    NVP Software Solutions will be participating in the following three software testing and quality assurance events taking place this April in Ontario, Canada. The events are in Toronto, Kitchener-Waterloo and London over the coming two weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!

    Toronto Association of Systems & Software Quality

    TASSQ – Toronto Association of Systems and Software Quality – Everything You Wanted to Know about the CSQE! – Brenda Fisk, Director, ASQ Canada Deputy Regional Director 2014-2016, Software Division, Division Executive Team – April 26, 2016 – See http://www.tassq.org/

    Software Testing in Kitchener Waterloo

    KWSQA – Kitchener Waterloo Software Quality Association – The Bare Minimum You Need to Know about Web Application Security in 2016 – Ken De Souza – April 27, 2016 – See www.kwsqa.org

    London Quality Assurance Peer-to-Peer

    Contact neil@nvp.ca for more details.

  • Testing for System Integrators – Part 5

    Last week our blog discussed the remaining answers to the questions and promised that we would look in detail at two of the answers (which are somewhat similar, so we will concentrate on only one):

    There is nothing in the contract (contract is signed) and there is no intention of putting anything in the contract about Quality Assurance.
    Now you have a challenge. Clearly the process is mostly done and there is absolutely no buy-in to Quality Assurance. The next question that needs to be asked is “Why have you brought Quality Assurance in if there is no interest?”

    The key steps here are to determine your position and map out your strategy. There are any number of answers to the question “Why have you brought Quality Assurance in if there is no interest?”

    1. The final client has belatedly required it. That is, they have realised it is an omission from the contract and now feel it is incumbent on the System Integrator to provide this as part of the deliverables. You need to determine the final client’s needs and work towards those.
    2. The solution is more complex than the System Integrator thought, and now they feel a need for Quality Assurance. That is, like the client above, they have realised the value provided by Quality Assurance and now want to implement it, even though they were trying to avoid it earlier. There is probably still little buy-in from most of the group. You need to look at each of the Stakeholders, determine their status vis-à-vis Quality Assurance, and plan to convert them all to supporters. This is a crucial piece of your strategy if you are to be successful.
    3. The System Integrator’s management is becoming nervous and wants Quality Assurance there as a check. While you have management support, the team may feel they have an extra burden and possibly that someone is watching them. As above, you will need to look at the Stakeholders and see how to convert them to supporters. Otherwise you will get no information at all.
    4. The last possibility is that they want someone to blame. This is a tricky one. No matter what you do (either proactively or reactively), they may blame you. You need to plan carefully to make sure that your work is recognised as contributing to the success of the project. You need to be very proactive in stating what needs to be done, why it needs to be done, and the benefits accruing from having it done. And make sure that everything is documented!

    Happy New Year!

  • Testing for System Integrators – Part 3

    Over the next few weeks, the NVP blog will focus on Software Testing for System Integrators. From NVP’s point of view, a System Integrator is someone who brings together a number of applications (from vendors), adds some glue and ends up with a solution for the organization they are working with. This seems to agree with the Wikipedia definition fairly closely. So where does Quality Assurance come into this? One would like to think early or very early in the process but that’s not always the case.

    Last week we provided several possible answers to our original questions. This week we look at what to do with each of them:

    1. The contract states the following specifically about Quality Assurance and everyone is in agreement
      This means that you simply have to “bridge the gap” between what is expected from the vendors and what is promised to the final client. The only problem may be that you do not agree with the contracted items.
    2. The contract says nothing about Quality Assurance but it’s noted as a topic and the contract will not be finalized without this discussion
      This is almost the best situation. While it may be a little late in the process, the willingness to add Quality Assurance exists and people are behind it.
    3. The contract says nothing about Quality Assurance so far, but now that you have brought it up we will add it.
      The same comment as above is applicable except that there is not quite the backing we might have had earlier.
    4. There is something in the contract about Quality Assurance and we can look it up for you (contracts are signed).
      Well at least they considered it; it may not be correct or complete but it was not entirely ignored. Once you find out what is in the contract you may (or may not) have concerns to handle.
    5. There is nothing in the contract (contract is signed) and there is no intention of putting anything in the contract about Quality Assurance
    6. We don’t know (but that is a good question)
    7. We don’t know (and we don’t care)

    Suffice it to say, the items in the above list show an obvious gradation, from very manageable to a real challenge, in the order they are presented. If you get the first answer, you’re well on your way. If you get some of the middle answers you have some work to do, but there’s still time to effect change. If you get the last few answers, you are in trouble but not defeated!

    Next Week: What to do with the answers (remainder).

  • Testing for System Integrators – Part 2

    Over the next few weeks, the NVP blog will focus on Software Testing for System Integrators. From NVP’s point of view, a System Integrator is someone who brings together a number of applications (from vendors), adds some glue and ends up with a solution for the organization they are working with. This seems to agree with the Wikipedia definition fairly closely. So where does Quality Assurance come into this? One would like to think early or very early in the process but that’s not always the case.

    Last week we asked two $10,000 questions, and this week we promised the possible answers. Unlike the questions, which are specific to the final client and the suppliers, the answers are more general and apply to both.

    1. The contracts state the following specifically about Quality Assurance and everyone is in agreement
    2. The contracts say nothing about it so far, but we have Quality Assurance as a topic and the contract will not be finalised without this discussion
    3. The contracts say nothing about Quality Assurance so far, but now that you have brought it up we will add it.
    4. There is something in the contracts about Quality Assurance and we can look it up for you (contracts are signed).
    5. There is nothing in the contracts (contracts are signed) and there is no intention of putting anything in the contracts about Quality Assurance
    6. We don’t know (but that is a good question)
    7. We don’t know (and we don’t care)

    Suffice it to say, the items in the above list have an obvious gradation from good to terrible in the order they are presented. If you get the first answer, you’re well on your way. If you get some of the middle answers you have some work to do, but you may be in time to effect some change. If you get the last few answers, you are in trouble!

    Next Week: What to do with the answers.

  • Testing for System Integrators – Part 1

    Over the next few weeks, the NVP blog will focus on Software Testing for System Integrators. From NVP’s point of view, a System Integrator is someone who brings together a number of applications (from vendors), adds some glue and ends up with a solution for the organization they are working with. This seems to agree with the Wikipedia definition fairly closely. So where does Quality Assurance come into this? One would like to think early or very early in the process but that’s not always the case.

    The $10,000 Quality Assurance question.

    Last week we promised the $10,000 Quality Assurance question. In fact, there are two questions, with several variants, and here they are:

    1. What did you request from your vendors in terms of Quality Assurance?
    2. What did you promise your final client in terms of Quality Assurance?

    Starting with the first question, about requests to the vendors, here are the related points to consider:

    • What does the contract state about Quality Assurance or Software Testing?
    • Are the expectations documented?
    • What can we demand in terms of proof?
    • What are the System Integrator’s deliverables to the vendors?

    Now take the second question, about promises to the final client, and the above is reversed:

    • What does the contract state about Quality Assurance or Software Testing?
    • Are there expectations documented?
    • What is expected in terms of proof?
    • What are the System Integrator’s deliverables to the final client?

    A System Integrator often acts as a bridge, with some support provided, but if the party requiring the testing cannot answer the two questions above, along with the subsidiary questions outlined, then more research has to be done before the project starts. Without answers to these two questions, it will be very difficult to get started on Quality Assurance activities and to succeed in the process.

    Next Week: Answers to those questions…