Tag: Test Cases

  • A Better Way to Test – 7

    In our last blog we discussed testing to risk.

    One of the methods we mentioned for addressing risk was testing, even if it was not the only method. However, saying that we should test and actually doing it are two different things. Testcases have to be written, or at least understood. Results need to be gathered in some fashion and reports produced. Whether you work very formally and need everything recorded, or can rely on what you have seen and experienced without written proof, does not matter: you will have to do some testing, and to do testing you need some statement of what you are going to do and accomplish.

    We recommend building test conditions or test objectives in order to determine what is going to be tested. These don’t have to be detailed or have all the components of a standard testcase, as long as they are commensurate with the risk of the project. This key point is missed quite frequently: people (in particular those under time and budget pressure) want to reduce the level and detail of testing to the point where it no longer sufficiently addresses the risk.

    That is why we suggest connecting each testcase to a risk where possible. It makes it much easier to weigh the consequences of running or skipping a particular testcase during a testing cycle. It also, conveniently, feeds your regression base immediately and allows a sensible selection when time runs out, as the sketch below illustrates.
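    As a minimal sketch (in Python; the risk levels, timings and the simple greedy selection rule are all invented for this example, not a prescribed scheme), here is how testcases tagged with a risk level might be selected when a cycle runs short of time:

    ```python
    from dataclasses import dataclass

    # Hypothetical risk scale; real projects define their own.
    RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

    @dataclass
    class TestCase:
        title: str
        risk: str      # "high", "medium" or "low"
        minutes: int   # estimated execution time

    def select_for_cycle(cases, budget_minutes):
        """Greedily pick the highest-risk cases that fit the remaining time."""
        chosen, used = [], 0
        for case in sorted(cases, key=lambda c: RISK_ORDER[c.risk]):
            if used + case.minutes <= budget_minutes:
                chosen.append(case)
                used += case.minutes
        return chosen

    cases = [
        TestCase("Interest calculated correctly", "high", 30),
        TestCase("Report layout matches template", "low", 20),
        TestCase("100 simultaneous connections", "medium", 45),
    ]
    for case in select_for_cycle(cases, budget_minutes=60):
        print(case.risk, "-", case.title)
    ```

    The same risk tag that drives the selection also tells you which cases belong in the regression base.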

    Take a look at some of the seminars that we offer that address this situation and see if they apply to you. Testing can be better.
    Contact us for further information.

  • Review Requirements (by Writing Testcases)

    This is the second post in the series about reviewing requirements. This week we consider the method of using testcases to review the requirements. Some people feel this is too early in the process to be thinking about testcases, but we do not have to write complete testcases: test conditions or objectives are enough at the start, and the payback is substantial.

    As we mentioned in the last blog, reviewing requirements can be surprisingly difficult for some people. The following problems may arise:

    • You don’t know anything about the system under review
    • The requirements are disorganised
    • It is difficult to maintain concentration for an extended period of time

    The act of writing testcases (conditions or objectives) and attaching them to the requirement for which they were written (traceability) clarifies our thinking about what the requirement really means. Most requirements generate multiple testcases, and to write even the start of them we need to decompose the requirement into its component parts and understand each one. Doing so highlights the deficiencies in the requirements (see the sketch after the list below).

    1. If our test conditions contradict each other then the requirements are probably inconsistent.
    2. If our test conditions don’t seem to cover everything, then the requirements are probably incomplete.
    3. If our test conditions expect to test items that are clearly wrong then the underlying requirements are probably wrong.
    4. If our test conditions seem unclear when completed, then the requirements are probably just as unclear.
    We could draw other conclusions, but these are the main ones for requirements. In every case, the process is being used to clarify our thinking.
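    To make the traceability idea concrete, here is a minimal sketch in Python. The requirement IDs and condition wording are invented; the point is only that an explicit requirement-to-condition mapping makes gaps visible at a glance:

    ```python
    # Hypothetical traceability table: requirement ID -> the test
    # conditions written against it. IDs and wording are illustrative.
    traceability = {
        "REQ-001": ["Test that interest is calculated on the daily balance",
                    "Test that interest is posted on the last business day"],
        "REQ-002": [],   # no conditions could be written yet
        "REQ-003": ["Test that a closed account rejects new deposits"],
    }

    # A requirement with no attached conditions usually signals a gap in
    # the requirement itself or in our understanding of it.
    uncovered = [req for req, conds in traceability.items() if not conds]
    print("Requirements without test conditions:", uncovered)
    ```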

    Take a look at some of the seminars that we offer that address this situation and see if they apply to you. Requirements review is very cost-effective.

    Contact us for further information.

  • Debugging versus Scenario Testing

    Recently someone in a class asked about the difference between debugging and scenario testing, given that many of the errors found in testing must ultimately be fixed in the code.

    If we assume that we are performing a calculation of some sort, or running an algorithm, then the following applies.

    During User Acceptance Testing – the test should work end to end without an error in actual execution. The appearance, formatting or placement of the result may be incorrect, but there should be no problems in running the actual process. This applies to normal cases; clearly there may be issues with unique cases identified late in the process. If the algorithm or test case does not run or cannot be completed, then we are probably debugging rather than scenario testing.

    During System Testing – the calculation should complete, although the result may be wrong and one or more steps along the way may produce incorrect results. So the results should be available and the algorithm should run to completion, even if the results are wrong. System Testing is aimed at requirements, so this is where we should check whether the results agree with the requirements. If the individual test cases do not run or cannot be completed, then we are probably debugging rather than scenario testing.

    During Integration Testing – each individual piece (validating a part of the overall algorithm) should run, but the pieces may not work together. If the individual pieces do not work, we are debugging. If they cannot be run as an integrated whole, we are testing.

    During Unit Testing – we are debugging each individual piece of the algorithm. This is code debugging and is not expected to be anything else.
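    To illustrate the distinction, here is a minimal Python sketch. The interest functions and figures are invented; the contrast is between a unit-level check that debugs one piece of the algorithm and a scenario-level check that the whole calculation runs end to end and matches the requirement:

    ```python
    # Hypothetical interest-calculation steps, invented for illustration.
    def daily_rate(annual_rate):
        return annual_rate / 365

    def accrue(balance, annual_rate, days):
        return balance * daily_rate(annual_rate) * days

    # Unit level: debugging one piece of the algorithm in isolation.
    def test_daily_rate():
        assert abs(daily_rate(0.0365) - 0.0001) < 1e-9

    # Scenario level: the whole calculation must run end to end; here we
    # check the result against the requirement, not one internal step.
    def test_month_of_interest():
        assert round(accrue(10_000, 0.0365, 30), 2) == 30.00

    test_daily_rate()
    test_month_of_interest()
    ```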

    Contact us to see whether you are scenario testing or debugging.

  • Test Conditions

    Test Conditions is a term with multiple definitions. For the sake of this blog, we are going to define them as the equivalent of (low-level) Test Objectives: one-line statements of what is going to be tested. (High-level Test Objectives may relate to more system-level objectives, and some of them may be derived from the Project Charter or plan.)

    For example, the Test Conditions may read as follows:

    1. Test that the system calculates interest correctly.
    2. Verify that the design allows for 100 simultaneous connections.
    3. Validate that the user interface is accessible according to the corporate standards.

    A question that frequently arises is: why bother to write these Test Conditions at all? It seems like an extra step with minimal return. Why not go directly to the Test Cases?

    We use them for a number of reasons.

    1. They allow the tester to consider the entire system rather than getting into detailed test cases at the first step.
    2. They allow other stakeholders to review the test coverage without having to read through full test cases.
    3. They can identify coverage and omissions in coverage with limited effort.
    4. They allow for estimation of the number of test cases that will be needed before the testcases are written.
    5. They allow for estimation of the test effort early in the project.
    6. They can help identify the components of the test environment earlier, allowing it to be specified and built before it is needed.
    7. They determine the required test data and allow it to be gathered and made ready before testing starts.

    We have found that the effort of building test conditions is more than paid back in early information and helpful triggers for what needs to be done. A small sketch of the idea follows.
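    Here is a minimal sketch in Python of how one-line Test Conditions, kept as plain data, support early estimation (points 4 and 5 above). The per-condition case counts and the effort figure are assumptions to be calibrated locally:

    ```python
    # Hypothetical one-line test conditions, each tagged with a rough
    # guess at how many test cases it will expand into.
    conditions = [
        ("Test that the system calculates interest correctly", 6),
        ("Verify that the design allows 100 simultaneous connections", 2),
        ("Validate UI accessibility against corporate standards", 4),
    ]

    MINUTES_PER_CASE = 45  # assumed average effort per test case

    total_cases = sum(estimate for _, estimate in conditions)
    print(f"{len(conditions)} conditions -> ~{total_cases} test cases")
    print(f"Rough execution effort: ~{total_cases * MINUTES_PER_CASE / 60:.1f} hours")
    ```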

    Discussion Questions

    1. Do you write Test Conditions or Test Objectives?
    2. Were they beneficial to the project?
    3. What would you have done differently based on what you know now?

    Next Week: Process Improvement – Deal with Results

  • Test Cases

    Test Cases, Test Scripts, Test Runs, Test Sets, Test Scenarios, Test Databases, Test Plans and Test Strategies have all been used to describe some type of Test Case. When it comes to popular automated test tools, some of these terms do have very specific meanings. However, others are defined within testing courses or specifically for test certifications. The problem is that many of these definitions differ. Sometimes the difference is in the definition itself; other times it is in the use. The loose terminology can lead to problems when people get something different from the test case they asked for.

    For the sake of this post, we are going to define a test case as a statement of something that needs to be tested. It includes the following fields at a minimum (a small sketch in code follows the list):

    1. Test Title
    2. Test Objective
    3. Priority
    4. Steps (or Actions)
    5. Expected Results
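
    As a minimal sketch, the fields above might be captured in a structure like the following (Python; the example values are invented):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        title: str       # 1. Test Title
        objective: str   # 2. Test Objective
        priority: str    # 3. Priority, e.g. "high"
        steps: list = field(default_factory=list)             # 4. Steps (or Actions)
        expected_results: list = field(default_factory=list)  # 5. Expected Results

    tc = TestCase(
        title="Interest on daily balance",
        objective="Verify interest is computed on the daily closing balance",
        priority="high",
        steps=["Open account with a 10,000 balance", "Run end-of-day batch"],
        expected_results=["Interest of 1.00 is accrued"],
    )
    print(tc.title, "-", tc.priority)
    ```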

    Obviously this is a fairly minimal list, and there are many more useful fields that could be included. We often encounter Test Cases (and there are a lot of them) with more fields, especially when they are recorded in one of the popular test administration tools.

    Whatever is included, we use a test case as the basis for what needs to be tested in any particular application under test. How we actually run that test case, and the amount of detail recorded in the test case and about the run, will differ from place to place depending on the criticality of the application and the time we have.

    We want to finish with a few questions and ask you to post your answers.

    1. Do you write test cases?
    2. Do you get them reviewed by the relevant stakeholders?
    3. Are they approved at any point?
    4. In your experience, how well do they test what you need to test?

    Next Week: KWSQA, TASSQ and London Peer-to-Peer.

  • Examples of Verification in Software Testing

    In our last blog post, we included a list of examples of verification without going into too much detail. In this blog we will take a couple of those items and expand on them. The verification targets that tend to give software testers the best return are Test Plans, Test Cases, Test Data and Test Results.

    Test Plans

    Applying verification techniques to a Test Plan can save hours of effort. The three methods we can use to review a test plan are:

      • Walkthrough – in this method, the author of the Test Plan ‘walks’ one or more of their peers through the plan, explaining each section and what is meant by the content. The role of the peers is to find problems, omissions, extra content, incomplete or inconsistent items, and to add anything they feel was missed. The intent is a better document that more accurately reflects the needs of the project stakeholders. A new version is issued after the errors are corrected, and it is used going forward.
      • Document Review – in this method, a review committee (preferably drawn from the interested stakeholders) reviews the document and records the same items listed above for the walkthrough. Once they have finished their individual reviews, they come together to create a final list of problems to be corrected. A new version is issued after the errors are corrected, and it is used going forward.
      • Inspections – in this method, formalized roles are defined and assigned, and a procedure is followed to ensure a proper inspection of the document. The intent and result are the same as for the previous two methods; the only difference is the degree of formality.

    What’s the point of all of this? The payback from reviewing the plan (using any of the methods above) more than pays for itself in fewer errors going forward and less work to be undone, redone and redone (again).

    If you know your test plan is poor, or you’re not even sure where to start, give us a call at 416-927-0960 or visit our website at NVP.ca to find out where you would benefit from implementing verification techniques in your organisation.