Test Cases, Test Scripts, Test Runs, Test Sets, Test Scenarios, Test Databases, Test Plans and Test Strategies have all been used to describe some kind of test case. In popular automated test tools, some of these terms do have very specific meanings, while others are defined within testing courses or specifically for test certifications. The problem is that many of these definitions differ: sometimes the difference is in the definition itself; other times it is in how the term is used. This loose terminology can lead to problems when people receive something different from the test case they asked for.

For the sake of this post, we are going to define a test case as something that needs to be tested. It includes the following fields (at a minimum):

  1. Test Title
  2. Test Objective
  3. Priority
  4. Steps (or Actions)
  5. Expected Results
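As a rough illustration, the minimal fields above could be captured in a simple record type. This is only a sketch; the field names, the `TestCase` class, and the example values are our own assumptions, not a reference to any particular tool's schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """Minimal test case record; field names are illustrative, not a standard."""
    title: str
    objective: str
    priority: str                 # e.g. "High", "Medium", "Low"
    steps: List[str]              # ordered actions the tester performs
    expected_results: List[str]   # expected outcome for each step

# Hypothetical example case, just to show the shape of the record
login_case = TestCase(
    title="Valid login",
    objective="Verify a registered user can sign in",
    priority="High",
    steps=["Open the login page", "Enter valid credentials", "Click Sign In"],
    expected_results=["Login form is shown", "Fields accept input", "Dashboard loads"],
)
print(login_case.title)  # → Valid login
```

A real test management tool would add IDs, preconditions, attachments and run history on top of this, but the core of a test case is essentially this record.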

Obviously this is a fairly minimal list, and there are many more useful fields that could be included. We often encounter test cases (and there are a lot of them) that include additional fields, especially when they are recorded in one of the popular test management tools.

Whatever is included, we use a test case as the basis for what needs to be tested in any particular application under test. How we actually run that test case, and how much detail we record in it and about the run, will differ from place to place depending on the criticality of the application and the time we have.

We want to finish with a few questions and ask you to post your answers.

  1. Do you write test cases?
  2. Do you get them reviewed by the relevant stakeholders?
  3. Are they approved at any point?
  4. In your experience, how well do they test what you need to test?

Next Week: KWSQATASSQ and London Peer-to-Peer.