The Scope of Testing may refer to how much testing we are going to do, or to how much of the system we are planning to test. The amount of testing could be defined as running multiple phases of testing with differing aims. How much of the system we plan to test, and actually do test, can be measured by a coverage tool. These two definitions are not independent of each other. Regardless of which definition you decide to use, be prepared for some arguing.
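As a rough illustration of the second definition, the percentage a coverage tool reports is just executed statements divided by total statements. Real tools (coverage.py for Python, for example) trace execution automatically; this hypothetical sketch only shows the arithmetic behind the number:

```python
def statement_coverage(executed_lines: set, total_statements: int) -> float:
    """Toy coverage calculation: percentage of statements that were executed.

    Real coverage tools instrument or trace the code to collect
    `executed_lines`; here we supply it by hand for illustration.
    """
    if total_statements == 0:
        return 0.0
    return 100.0 * len(executed_lines) / total_statements

# Suppose a 40-statement module where the test run executed 30 distinct lines:
print(statement_coverage(set(range(1, 31)), 40))  # 75.0
```

The point of the metric is the gap it exposes: the 25% of statements never exercised is exactly the part of the system that was "in scope" on paper but untested in practice.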
The (not entirely rhetorical) questions you will get asked include the following:
- You are planning on how much testing? (too much or too little)
- What makes you think that is enough testing? (too little)
- You are planning on testing that? (should be included or should not be included)
- On what did you base that estimate or expectation?
The questions can go on like that for ages. I have one client who does not want any errors in production. Their mantra is that everything is in scope and you test as much as you can. They are no more successful than anyone else, and sometimes less so.
Whatever your policy, the following questions will guide the creation of the scope:
- What must be tested?
- What can be tested (within budget and time constraints)?
- What is the best use of the resources?
Make sure to document the scope up front and get it signed off. That will reduce problems later on and create a much more harmonious working relationship.
Of course, once we have defined the scope, we then need to define what is Out of Scope. More arguments are on the way! Incidentally, Out of Scope is not simply everything that is not In Scope; it must be specified explicitly.
- Do you define your Scope of Testing?
- Has it been disputed?
- What would you have done differently based on what you know now?
Next Week: Training