Tag: software testing

  • Test Conditions

    Test Conditions is a term with multiple definitions. For the purposes of this blog, we will define them as the equivalent of (Low-Level) Test Objectives: one-line statements of what is going to be tested. (High-Level Test Objectives may relate to more system-level objectives, and some of them may be derived from the Project Charter or plan.)

    For example, the Test Conditions may read as follows:

    1. Test that the system calculates interest correctly.
    2. Verify that the design allows for 100 simultaneous connections.
    3. Validate that the user interface is accessible according to the corporate standards.

    The question that frequently arises is: why bother to write these Test Conditions at all? It seems like an extra step with minimal return. Why not go directly to the Test Cases?

    We use them for a number of reasons.

    1. They allow the tester to consider the entire system rather than diving into detailed test cases as a first step.
    2. They allow other stakeholders to review the test coverage without having to read through full test cases.
    3. They can identify coverage, and omissions in coverage, with limited effort.
    4. They allow for estimation of the number of test cases that will be needed before the test cases are written.
    5. They allow for estimation of the test effort early in the project.
    6. They can help identify the components of the test environment earlier, allowing it to be specified and built before it is needed.
    7. They determine the required test data and allow it to be gathered and made ready before testing starts.

    We have found that the effort in building test conditions is more than paid back in early information and helpful triggers for what needs to be done.
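
    To make that payback concrete, here is a minimal Python sketch (not NVP tooling) of keeping Test Conditions as one-line statements with a rough test case count attached, so the list itself yields an early effort estimate; the condition texts, case counts and hours per case are illustrative assumptions only.

      # A minimal sketch of tracking Test Conditions before any test cases exist.
      # The condition texts, estimated case counts, and hours per case are assumptions.
      test_conditions = [
          {"id": "TC-01", "statement": "Test that the system calculates interest correctly.", "est_cases": 8},
          {"id": "TC-02", "statement": "Verify that the design allows for 100 simultaneous connections.", "est_cases": 3},
          {"id": "TC-03", "statement": "Validate that the user interface meets corporate accessibility standards.", "est_cases": 12},
      ]

      HOURS_PER_CASE = 1.5  # assumed average effort to write and run one test case

      total_cases = sum(c["est_cases"] for c in test_conditions)
      total_hours = total_cases * HOURS_PER_CASE
      print(f"{len(test_conditions)} conditions -> ~{total_cases} test cases, ~{total_hours:.0f} hours")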

    Discussion Questions

    1. Do you write Test Conditions or Test Objectives?
    2. Were they beneficial to the project?
    3. What would you have done differently based on what you know now?

    Next Week: Process Improvement – Deal with Results

  • How interactive prototyping can improve QA in the SDLC

    It’s often said that quality must be built in, not added on. But when it comes to the Software Development Lifecycle (SDLC), the reverse often happens: defects are identified late, in the Testing Phase, after coding is done. This means bugs are expensive to fix and solutions are found at the last minute, putting quality at risk. Early Lifecycle QA, from requirements definition onward, results in a better software development experience and, hopefully, a better end product.

    But even when Early Lifecycle QA does happen, it’s not always plain sailing: business requirements documents are often scanty and don’t give QA professionals enough information; other stakeholders may resist QA specialists coming in and “telling them their job” at the review stage; and some requirements are untestable because they lack clarity. And of course things change throughout any project; flexibility is a must.

    So how can QA professionals ensure that they get involved, and stay effective, from the outset of the SDLC and throughout it? Enter interactive prototyping. Using an interactive prototyping tool can facilitate early-stage QA and avoid common pain points.

    Requirements definition and gathering

    QA specialists sometimes receive little information on which to base tests at this stage, thanks to paltry requirements or incomprehensible Business Requirements Documentation (BRD). Additionally, QAs are often sent the documentation too late, meaning there’s no time to set up adequate tests. Gathering and defining requirements with a prototyping tool addresses this: requirements can be imported or created directly in the prototype, and all invited stakeholders (including QAs) can add to or comment on those requirements in real time. Once you have the baseline of requirements, a System Testing Plan can be finalized.

    Interactive requirements and iterative process

    Once the BRD and System Requirements Specification are agreed upon, the QA team can set about reviewing requirements in the prototype. Running user test cases with a designated User Proxy – someone who takes on the role of the User – allows QA to be approached from three angles: functional, structural and conformance. All QA team members can add to and edit the BRD in the prototype, ensuring that user and system needs are accurately represented at this early stage.

    Using a prototyping tool to facilitate this process reduces time and budget concerns for project managers, which means they are more likely to agree to incorporating QA teams early on.

    Design and QA

    With a version history of requirements accessible within the prototype, the design team has a clear map to work from. They can build an interactive prototype based on the validated requirements, linking each feature to its relevant requirement and thereby facilitating QA testing. Once the design team has produced a high-fidelity prototype, activities such as verifying system architecture and carrying out system audits can be done on the prototype. Finding and fixing bugs through prototype testing is a lot cheaper than fixing them in the code.

    Coding and Deployment

    Later SDLC stages can now go ahead, with the QA team carrying out coding-related Quality Assurance activities such as verifying implementation of top requirements, and checking the quality of code with Product Quality Analyzer tools.

    Key Success Markers

    Early Lifecycle Quality Assurance requires collaboration between teams and a shared vision, factors supported by the inclusion of interactive prototyping in the SDLC. By prioritizing Early Lifecycle QA, rework and costs are reduced, QA input is incorporated at every stage of the project, and time to market is optimized.

    Justinmind is a prototyping tool for web and mobile applications that allows you to visualize your software solution before starting development.

  • Test Run

    Our latest blog will discuss the Test Run. For today’s purpose, NVP considers a Test Run to be one single execution of a testcase. This could mean that the testcase ran to completion and the expected AND actual results were identical, or that the testcase’s actual results did not equal the expected results. We have stayed away from the words ‘successful’ and ‘unsuccessful’, since some may feel a testcase is only successful if it uncovers a problem and is unsuccessful if it does not.

    We are interested in this statistic for a number of reasons:

    1. It helps in estimation
    2. It helps justify the time taken to test
    3. It provides a measure of code stability

    Estimation

    Knowing the number of Runs of a testcase helps determine how long the cycles and the whole test effort will take next time. If we know we had to run each testcase an average of 5 or 6 times before it ran to completion without raising an issue, then we know how many times we may need to run it next time. Note that unsuccessful runs may include attempts that led to fixing the testcase or the relevant test data. Once we have ‘debugged’ the testcase, these runs may not recur.
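
    To illustrate how this feeds back into estimation, here is a small Python sketch; every figure in it is an invented assumption rather than real project data.

      # Sketch: using historical runs-per-testcase to size the next test effort.
      # All numbers below are illustrative assumptions.
      runs_last_time = [5, 6, 7, 4, 6, 9, 5]   # runs needed per testcase before a clean completion
      avg_runs = sum(runs_last_time) / len(runs_last_time)

      planned_testcases = 120                  # testcases expected in the next project
      minutes_per_run = 20                     # assumed average execution time per run

      projected_runs = planned_testcases * avg_runs
      projected_hours = projected_runs * minutes_per_run / 60
      print(f"Average runs per testcase last time: {avg_runs:.1f}")
      print(f"Projected executions: {projected_runs:.0f} (~{projected_hours:.0f} hours)")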

    Justification

    If we only report the count of completed testcases with actual results equalling expected results, then each testcase might only show a single execution. This would hide a lot of work and effort and make the testers appear very unproductive. Showing that each testcase was executed 6 or 7 times before we were satisfied gives a much better idea of the effort involved.

    Code Stability

    If a testcase is run a dozen times and only on the last time does it run to completion with Expected Results equal to Actual Results, then we may have a concern with code stability or whether that final run was really correct. Something that fails a dozen times and then is successful is highly suspect. Maybe the conditions changed, maybe we missed something, maybe the issue was finally fixed. Whatever the case, we are not sure of the stability.
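
    As a purely illustrative sketch, run histories could be scanned automatically for this kind of concern; the testcase names, histories and threshold below are assumptions.

      # Sketch: flag testcases whose run history suggests a stability concern.
      # The run histories and threshold are invented for illustration.
      run_history = {
          "TC-101": ["fail"] * 12 + ["pass"],   # passes only on the 13th attempt: suspect
          "TC-102": ["fail", "fail", "pass"],
          "TC-103": ["pass"],
      }

      SUSPECT_THRESHOLD = 6   # assumed number of failures before a pass that warrants review

      for case, runs in run_history.items():
          failures_before_pass = runs.index("pass") if "pass" in runs else len(runs)
          if failures_before_pass >= SUSPECT_THRESHOLD:
              print(f"{case}: {failures_before_pass} failed runs before passing - review stability")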

    Discussion Questions

    1. Do you have defined Test Runs?
    2. What is the worst case for the number of times they had to be run?
    3. What is your least number of runs?

    Next Week: Process Improvement

  • Test Cycles

    NVP considers a Test Cycle to be one complete execution of a group of test cases. The reason we’re interested in this particular item is that it leads to estimation. The first questions in any testing project are:

    1. How long is it going to take?
    2. How much is it going to cost?
    3. When will you be done?

    These questions can be difficult to answer when starting a project as a new tester or test manager, or with limited experience in the software one has been asked to test. Having defined test cycles helps solve that issue.

    In order to answer those questions we need to:

    1. Define the contents of the group of tests constituting the cycle
    2. Get an estimate of how long each test will take
    3. Add up the resultant times
    4. Build in some contingency
    5. Use that as an estimate for the length of the cycle

    The above gives us an estimate for the length of a single cycle.

    The next question is how many cycles will be run. Our answer is usually a minimum of three, on the grounds that there are two debug cycles and, hopefully, a clean run. In our experience we have managed to get away with two cycles, but that’s unusual. Many times it’s many more than three, especially if the code is weak or the full requirements are still being worked out. Usually you will have an idea after your first test cycle of how many will have to be run.

    In order to answer the question of when you will be done, you then need to multiply the number of projected cycles by their individual lengths, add in time for the fixes to be made and promoted, and use that as an estimate of the completion date (and of the cost, by applying the chargeback rate).
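
    The arithmetic is simple enough to sketch in a few lines of Python; every input below (test times, contingency, cycle count, fix time, chargeback rate) is an assumption for illustration, not a recommendation.

      # Sketch of the cycle-length and completion estimate described above.
      # All inputs are illustrative assumptions.
      test_times_hours = [0.5, 1.0, 0.75, 2.0, 0.5, 1.25]   # estimated time per test in the cycle
      contingency = 0.20                                     # 20% contingency
      cycle_length = sum(test_times_hours) * (1 + contingency)

      projected_cycles = 3          # two debug cycles plus, hopefully, a clean run
      fix_and_promote_hours = 16    # assumed total time for fixes between cycles
      chargeback_rate = 85          # assumed hourly rate

      total_hours = cycle_length * projected_cycles + fix_and_promote_hours
      total_cost = total_hours * chargeback_rate
      print(f"One cycle: {cycle_length:.1f} h; whole effort: {total_hours:.1f} h, about ${total_cost:,.0f}")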

    1. Do you have defined test cycles?
    2. What is the worst case for the number of times they had to be run?
    3. What is your least number of runs?

    Next Week: Process Improvement

  • Quality Assurance Assessments – Part 4

    Quality Assurance Assessments take a variety of forms in an IT project and can range from very informal to very formal. This week we will discuss WHAT to do with the results of a QA Assessment, now that we have explained HOW to do an Assessment in a past blog.

    What to do with the results of a QA Assessment

    There is a strong temptation to (facetiously) say Do Nothing with the Results, since that happens so frequently. The Assessment is completed and everyone just wants to forget about it. Not only is that a direct waste of the effort and time invested in the assessment, it also signals to everyone that their effort was unnecessary and their thoughts unappreciated. Don’t expect a lot of effort next time under this scenario.

    If we use the example from the last blog (referenced above) – a questionnaire or in-person interviews using open-ended questions to elicit the information – then we will end up with a lot of disparate information that may not be readily parsed.

    The steps are as follows:

    1. Review all the provided answers.
    2. During the review, write down some general categories for the answers (e.g. insufficient testing; requirements issues; development issues; testing issues). If the categories were predetermined then this step does not apply.
    3. Allocate the answers into the categories.
    4. Allocate the answers that fit into more than one category (put them into each applicable category).
    5. Allocate the answers that only occur once and do not fit into any category (create a category of Other and put them there).
    6. Extract a common consensus from each category (there is a lot of work in this step).
    7. Start a process of finding the root cause of the common problems.
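
    As an illustration of steps 2 through 5, here is a minimal Python sketch that allocates free-form answers to categories by keyword; the categories, keywords and answers are invented for the example, and real answers need more judgement than simple keyword matching. Step 6, extracting a consensus, still requires a human read of each bucket.

      # Sketch of allocating free-form assessment answers to categories (steps 2-5).
      # Categories, keywords and answers are illustrative assumptions.
      categories = {
          "insufficient testing": ["not enough tests", "coverage", "skipped"],
          "requirements issues":  ["requirement", "unclear", "changed late"],
          "development issues":   ["code quality", "unit test", "build"],
      }

      answers = [
          "Requirements were unclear and changed late in the project.",
          "We skipped regression coverage to meet the date.",
          "Build breaks slowed everyone down.",
          "The war room ran out of coffee.",   # fits nothing: goes to Other
      ]

      allocated = {name: [] for name in categories}
      allocated["Other"] = []

      for answer in answers:
          matched = [name for name, keywords in categories.items()
                     if any(k in answer.lower() for k in keywords)]
          # An answer may match more than one category; unmatched answers go to Other.
          for name in (matched or ["Other"]):
              allocated[name].append(answer)

      for name, items in allocated.items():
          print(f"{name}: {len(items)} answer(s)")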

    Now we have to act on the root causes and resolve them. This could be a whole series of blogs but we will leave that for the process improvement cycle.

    If you are having trouble working this out, contact us and we can help guide you and your team in the right direction.

    Finally, we’ll leave you with a few questions and ask you to post your answers.

    1. Have you participated in a Test Process Assessment?
    2. Has anyone acted on the results?
    3. Were the results used for Process Improvement?

    Next Week: Vocabulary

  • Upcoming Software Testing & Quality Assurance Events – April 2016

    NVP Software Solutions will be participating in the following three software testing and quality assurance events happening this April in Ontario, Canada. The events take place in Toronto, Kitchener-Waterloo and London over the coming two weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!

    Toronto Association of Systems & Software Quality

    TASSQ – Toronto Association of System and Software Quality – Everything you Wanted to Know about the CSQE! – Brenda Fisk, Director, ASQ Canada Deputy Regional Director 2014-2016, Software Division, Division Executive Team – April 26, 2016 – See http://www.tassq.org/

    Software Testing in Kitchener-Waterloo

    KWSQA – Kitchener-Waterloo Software Quality Association – The Bare Minimum You Need to Know about Web Application Security in 2016 – Ken De Souza – April 27, 2016 – See www.kwsqa.org

    London Quality Assurance Peer-to-Peer – contact neil@nvp.ca for more details.

  • Quality Assurance Assessments – Part 3

    Quality Assurance Assessments take a variety of forms in an IT project and can range from very informal to very formal in nature. This week we will discuss HOW to do a QA Assessment, now that we have explained WHY to do an Assessment in a past blog.

    HOW Quality Assurance Assessments are Done:

    The following needs to be done in order to complete a Quality Assurance Assessment:

    1. Determine the objective of the assessment. (Refer to Why Quality Assurance Assessments are Done in a past blog).
    2. Set up a team (may be a team of 1) to do the assessment.
    3. Determine the targeted group who are going to provide information to the team.
    4. Determine the method of getting the information (questionnaire; in person interviews; survey).
    5. Using the method selected above, carry out the assessment.
    6. Collate the results.
    7. Provide a report.

    The above is a general methodology. The following is a short example of a very basic QA Assessment.

      1. The objective is to determine how well the Software Testing Process worked on the last project.
      2. The team will be the Quality Assurance department (not involved in the particular project).
      3. Target audience: Software Testers, Developers, Project Manager(s), End Users, Management, all other interested stakeholders.
      4. Methodology: Individual interviews using a questionnaire. Some sample questions follow:
        • What went right with the project in terms of Software Testing?
        • What went wrong with the project in terms of Software Testing?
        • What expectations were and were not met?
        • How could the process be improved?

    This is a very small sample of questions to be answered. From there:

    1. Compile the answers into a report, removing the names and any identifying comments.
    2. Create a set of recommendations based on the results.

    If you are having trouble working this out, contact us and we can help guide you and your team in the right direction.

    Finally, we’ll leave you with a few questions and ask you to post your answers.

    1. Have you participated in a Test Process Assessment?
    2. What was the justification for the Assessment?
    3. Were the results used for Process Improvement?

    Next Week: KWSQA, TASSQ and London Peer-to-Peer

  • Software Testing Strategy – The Test Plan

    Any given test plan and its contents vary widely depending on who is doing the testing and what template is used. Test plans can be found anywhere on the web. They can be generic in nature or specific to a particular industry, with sections that are critical for regulatory or risk reasons. A test plan can also be seen as a strategy, although in some cases strategy is never considered. Sometimes test cases are included in a test plan.

    We consider a test plan to be a document that outlines a path for the testing but excludes the test cases (which we like to keep in a database). The test plan often includes the following (not a comprehensive list):

    1. Introduction
    2. Test Approach
    3. Assumptions and Dependencies
    4. Risks and the Risk Plan
    5. Schedule and Resources
    6. Glossary

    The Incredible Shrinking Test Plan

    We’ve seen one client invest a lot of time creating a test plan template for use within their organization. The template indicated what the contents of each section should include; if a section was irrelevant, a statement as to why it was inapplicable was to be noted, and under no condition was any section to be deleted. Unfortunately, one group misunderstood the importance of keeping ALL the data and deleted any section they felt was unnecessary. They then used the previous plan as the template for the next one, and so on, so every time a section was deleted it was lost forever. We used to refer to those plans as The Incredible Shrinking Test Plan: they just got smaller and smaller.

      1. Do you write test plans?
      2. If you use a template, do you have trouble filling it out?
      3. Do you get test plans reviewed by the relevant stakeholders?
      4. Are they approved at any point?
      5. What is your experience in how well they test what you need to test?

    And most importantly: Do you ever review them after the project is over and see how well you adhered to the initial outline?

    Next Week: Assessments – How