  • Scope of Testing

    The Scope of Testing may refer either to how much testing we are going to do or to how much of the system we plan to test. The amount of testing could be defined as multiple phases of testing with differing aims. The portion of the system we plan to test, and actually do test, can be measured with a coverage tool. These two definitions are not independent of each other. Whichever definition you decide to use, be prepared for some arguing.
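
    The idea of measuring "how much of the system we actually test" can be made concrete with a coverage tool. As a minimal illustration, Python's standard-library trace module can count which lines a test run executes (the classify function here is a hypothetical stand-in; real projects would normally use a dedicated coverage tool):

```python
import trace

def classify(n):
    """A hypothetical function under test."""
    if n < 0:
        return "negative"
    return "non-negative"  # the n < 0 branch is never exercised by the run below

# Count executed lines while running a single "test" under the tracer.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)
counts = tracer.results().counts  # maps (filename, line number) -> hit count
print(f"{len(counts)} line(s) of classify were executed")
```

    The branch the run never reaches (n < 0) is exactly the kind of gap a coverage report surfaces, which is what makes the second definition of scope measurable.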

    The (not entirely rhetorical) questions you will get asked include the following:

    1. How much testing are you planning? (too much or too little)
    2. What makes you think that is enough testing? (too little)
    3. You are planning on testing that? (should or should not be included)
    4. On what did you base that estimate or expectation?

    The questions can go on like that for ages. I have one client who does not want any errors in production. Their mantra is "everything is in scope; test as much as you can." They are no more successful than anyone else, and sometimes less so.

    Whatever your policy, the following questions will guide the creation of the scope:

    1. What must be tested?
    2. What can be tested (within budget and time constraints)?
    3. What is the best use of the resources?

    Make sure to document the scope up front and get it signed off. That will reduce the problems later on and create a much more harmonious working relationship.

    Of course, once we have defined the scope, we need to define the Out of Scope. More arguments are on the way! Incidentally, Out of Scope is not simply everything that is not In Scope; it must be specified explicitly.

    Discussion Questions

    1. Do you define your Scope of Testing?
    2. Has it been disputed?
    3. What would you have done differently based on what you know now?

    Next Week: Training

  • Test Training

    Training seems like an obvious topic, and not one to which a blog or two could usefully be devoted. However, we get a surprising number of questions about training and plan to address a few of them here. The first is what type of training is offered. We define three broad categories here:

    1. Training related to testing.
    2. Training related to a particular Test Tool.
    3. Application related Training.

    You only have to read the job advertisements to see the expectations related to open positions. You may see a long list of test tools with which the applicant is to be proficient. You will most likely see some reference to a Test Methodology or SDLC. Most job advertisements finish off with some soft skills.

    So how do our three categories relate to day-to-day work?

    Taking them in reverse order:

    Application related Training

    Clearly, the more the person knows about the application area for which the system was built, the easier it is to understand the risks, define the scope of testing, and explain the results to the business. It is also easier to understand the business requirements and expectations.

    Training related to a particular Test Tool

    This type of training is usually supplied by a vendor and can range from an overview of the test tool, allowing one to use it without in-depth knowledge, all the way to becoming a technical expert. The only caveat is that every tool is eventually superseded by something else, so every tool or technical process will eventually become redundant.

    Training related to testing

    This type of training covers the rest of the requirements. It teaches about SDLC, Communication, Risk, Planning, and Testing to name only a few items.

    Discussion Questions

    1. Do you participate in Training for Testing?
    2. Was it beneficial to the project?
    3. What would you have done differently based on what you know now?

    Next Week: Sources of Information

  • Test Conditions

    Test Conditions is a term that has multiple definitions. For the sake of this blog, we are going to define them as the equivalent of (low-level) Test Objectives: one-line statements of what is going to be tested. (High-level Test Objectives may relate to more system-level objectives, and some of them may be derived from the Project Charter or plan.)

    For example, the Test Conditions may read as follows:

    1. Test that the system calculates interest correctly.
    2. Verify that the design allows for 100 simultaneous connections.
    3. Validate that the user interface is accessible according to the corporate standards.
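
    A one-line condition like the first example above later expands into several concrete test cases. A minimal sketch of that expansion (the calculate_interest function, its simple-interest formula, and the chosen values are all hypothetical stand-ins, purely for illustration):

```python
def calculate_interest(principal, annual_rate, years):
    """Hypothetical simple-interest calculation, used only for illustration."""
    return principal * annual_rate * years

# Each (inputs, expected) pair is one test case derived from the single
# condition "Test that the system calculates interest correctly".
CASES = [
    ((1000.00, 0.05, 1), 50.00),  # typical values
    ((1000.00, 0.00, 1), 0.00),   # zero-rate boundary
    ((0.00, 0.05, 1), 0.00),      # zero-principal boundary
]

def test_interest_cases():
    for (principal, rate, years), expected in CASES:
        assert abs(calculate_interest(principal, rate, years) - expected) < 0.005

test_interest_cases()
print(f"{len(CASES)} test cases derived from one test condition")
```

    Counting the entries that each condition generates is also how conditions support early estimates of test-case numbers, before any test case is written.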

    The question that frequently arises is why bother to write these Test Conditions? It seems like an extra step with minimal return. Why not just go directly to the Test Cases?

    We use them for a number of reasons.

    1. They allow the tester to consider the entire system rather than getting into detailed test cases at the first step.
    2. They allow other stakeholders to review the test coverage without having to read through full test cases.
    3. They can identify coverage and omissions in coverage with limited effort.
    4. They allow for estimation of the number of test cases that will be needed before the test cases are written.
    5. They allow for estimation of the test effort early in the project.
    6. They can help identify the components of the test environment earlier, allowing it to be specified and built before it is needed.
    7. They determine the required test data and allow it to be gathered and made ready before testing starts.

    We have found that the effort in building test conditions is more than paid back in early information and helpful triggers for what needs to be done.

    Discussion Questions

    1. Do you write Test Conditions or Test Objectives?
    2. Were they beneficial to the project?
    3. What would you have done differently based on what you know now?

    Next Week: Process Improvement – Deal with Results

  • Quality Assurance Process Improvement – Part 4

    Quality Assurance Process Improvement is the current topic for our NVP Blog. We completed a series of 4 blogs on Assessments because at the end of the Assessment process a lot of organizations won’t act on the Assessment results if they don’t have a plan for moving forward. This is particularly true if the Assessment has not been tailored to the particular company in question. A standard Assessment process generates standard recommendations which may not be applicable. Make sure you detail your expectations at the beginning of the Assessment so you get value from the process and your expenditure of time.

    The last blog focused on how to do Process Improvement; now we'll address dealing with the results. Many people complete a process improvement assessment and discover a number of problems that need to be fixed, but then drop the process without fully solving the issues or taking advantage of all of the work that went into getting the results. This typically happens for the following reasons:

    • The work that needs to be done isn’t scalable or fun.
    • There’s no one to do the work.
    • There’s no budget for the implementation.
    • What was discovered is so unexpected that no one knows how to tackle it.

    These can all be addressed by the ‘divide and conquer’ methodology.

    Once the results of the assessment are known, they need to be organized into logical buckets. Each bucket is then assigned a set of tasks. Some people will tell you that we need to identify the synergies so that everything gets accomplished efficiently with minimal disruption. While that would be the optimal way of doing things, it is rare for anyone to be able to identify all the synergies simply by looking at the list of results of an assessment. We have to accept some redundancy, and the fact that some items are going to have to be reversed when new ones are put in place.

    Now is the time to implement your results.

    Next Week: Scope of Testing

  • How interactive prototyping can improve QA in the SDLC

    It’s often said that quality must be built in, not added on. But when it comes to the Software Development Lifecycle (SDLC), the reverse often happens: defects are identified late on in the Testing Phase, after coding is done. This means bugs are expensive to fix and solutions are found last-minute, putting quality at risk. Early Lifecycle QA, from requirements definition onward, results in a better software development experience and, hopefully, a better end product.

    But even when Early Lifecycle QA does happen, it’s not always plain sailing: business requirements documents are often scanty and don’t provide QA professionals with enough information; other stakeholders may be resistant to QA specialists coming in and “telling them their job” at the review stage; and some requirements are untestable thanks to lack of clarity. And of course, things change throughout any project; that’s a fact. Flexibility is a must.

    So how can QA professionals ensure that they can get involved and be effective from the outset of the SDLC and throughout it? Step up interactive prototyping. Using an interactive prototyping tool can facilitate early stage QA and avoid common pain points.

    Requirements definition and gathering

    QA specialists sometimes receive little information on which to base tests at this stage, thanks to paltry requirements or incomprehensible Business Requirements Documentation (BRD). Additionally, QAs are often sent the documentation too late, meaning there’s no time to set up adequate tests. Gathering and defining requirements in a prototyping tool addresses this: requirements can be imported or created directly in the prototype, and all invited stakeholders (including QAs) can add to or comment on those requirements in real time. Once you have the baseline of requirements, a System Testing Plan can be finalized.

    Interactive requirements and iterative process

    Once the BRD and System Requirements Specification are agreed upon, the QA team can set about reviewing requirements in the prototype. Running user test cases with a designated User Proxy (someone who takes on the role of the User) will allow QA to be approached from three angles: functional, structural and conformance. All QA team members can add to and edit the BRD in the prototype, ensuring that user and system needs are accurately represented at this early stage.

    Using a prototyping tool to facilitate this process reduces time and budget concerns for project managers, which means they are more likely to agree to incorporating QA teams early on.

    Design and QA

    With a version history of requirements accessible within the prototype, the design team has a clear map to work off. They can build an interactive prototype based on the validated requirements, linking each feature to its relevant requirement and thereby facilitating QA testing. Once the design team has produced a high fidelity prototype, activities such as verifying system architecture and carrying out system audits can be done on the prototype. Finding and fixing bugs through prototype testing is a lot cheaper than fixing them in the code.

    Coding and Deployment

    Later SDLC stages can now go ahead, with the QA team carrying out coding-related Quality Assurance activities such as verifying implementation of top requirements, and checking the quality of code with Product Quality Analyzer tools.

    Key Success Markers

    Early Lifecycle Quality Assurance requires collaboration between teams and a shared vision, factors supported by the inclusion of interactive prototyping in the SDLC. By prioritizing Early Lifecycle QA, rework and costs are reduced, QA input is incorporated at every stage of the project, and time to market is optimized.

    Justinmind is a prototyping tool for web and mobile applications that allows you to visualize your software solution before starting development.

  • Quality Assurance Process Improvement – Part 3

    Quality Assurance Process Improvement is the current topic in our Blog Series. We completed a series of 4 blogs on Assessments because at the end of the Assessment process a lot of organizations won’t act on the Assessment results if they don’t have a plan for moving forward. This is particularly true if the Assessment has not been tailored to the particular company in question. A standardized Assessment process generates standard recommendations which may not be applicable. Make sure you detail your expectations at the beginning of the Assessment so you get value from the process and your expenditure of time.

    Last time we looked at why to carry out Process Improvement; now we want to address the question of how to do it. First we need to identify the processes that we want to improve; we will assume that this step has been completed already. Then we need to measure the existing process for whatever attribute we want to improve. If we want it to be faster, then we need to measure the time it takes. If we want it to be more consistent, then we need to measure the output against some standard. Once we have a baseline of measurements (you will probably be measuring more than one item, since we do not want to improve one attribute at the expense of another), we can decide how to improve the process. If we want it to be faster, we might try to get its inputs at the right time or before they are needed, or try cutting out any extra steps; while doing this we do not want to reduce the quality of the output, so we will want to measure that as well. If we want it to be more consistent, we might look for places where the product deviates from the standard and try to improve those, while making sure that does not make any other part of the product worse.
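
    The baselining step above can be sketched in a few lines. This minimal illustration (the cycle-time figures are assumed, purely for the example) measures both speed and consistency at once, so an improvement in one can be checked against the other:

```python
from statistics import mean, stdev

# Hypothetical cycle times (hours) for ten runs of the process being improved.
baseline_hours = [12.0, 14.5, 11.0, 13.0, 18.0, 12.5, 13.5, 12.0, 15.0, 13.5]

speed = mean(baseline_hours)         # reduce this if "faster" is the goal
consistency = stdev(baseline_hours)  # reduce this if "more consistent" is the goal

print(f"baseline mean: {speed:.1f} h, spread: {consistency:.1f} h")
```

    The same two figures are then recomputed after the changed process has settled down, so that a speed gain bought at the cost of wildly inconsistent output is visible immediately.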

    The post improvement measurements should always be carried out after the process has had a chance to settle down and achieve some stability. Measuring too soon may lead to erroneous conclusions.

    The objective is to save the overall organization funds; not just the current project. As such the results of an Assessment and the activities done as part of Process Improvement must be assessed at the corporate level and not at the project level.

    Next Week: Guest Blog

  • Scheduling Test Cycles

    Scheduling Test Cycles often seems to create challenges for Managers, so we thought we’d tackle this for today’s blog. In our experience, there seems to be an ingrained tendency among Test Managers and Development Managers not to leave time between the Test Cycles or the Fix Cycles for the other party to do their work.

    I have seen Test Cycles scheduled consecutively, with no room to actually fix anything. The idea was that bugs could be fixed overnight or during the weekend, because nothing could be allowed to impede the test effort at this stage. The alternate problem is a schedule from the Development Manager that puts all the time into fixes or upgrades and allocates nothing for testing. The same question elicits a similar answer: testing can proceed overnight or on weekends.

    What is obvious is that there has to be compromise on both sides.

    However, it is possible to schedule overlap. Certainly developers can start fixing bugs found early in the test cycle before the cycle is finished, and it’s probably better that they do. This requires strong promotion and code-control procedures, and a plan for how the environments are going to be organized; otherwise, fixes start getting into the test environment before other testing is done. Similarly, testing can continue even while developers are in fix mode, with planning to cover items they aren’t working on at the moment.

    We just need to plan our way through this with the understanding that there will be changes as the project evolves, dependencies arise, and items change.

    Discussion Questions

    1. Have you Scheduled Test Cycles?
    2. If yes, to number 1, how did it work out?
    3. What would you have done differently based on what you know now?

    Next Week: Final meetings for the year

  • Quality Assurance Process Improvement – Part 2

    Quality Assurance Process Improvement is the current topic in our Blog Series. We completed a series of 4 on Assessments because at the end of the Assessment process a lot of organizations won’t act on the Assessment results if they don’t have a plan for moving forward. This is particularly true if the Assessment has not been tailored to the particular company in question. A standardized Assessment process generates standard recommendations which may not be applicable. Make sure you detail your expectations at the beginning of the Assessment so you get value from the process and your expenditure of time.

    Last time we looked at what Process Improvement is; now we want to address the question of why to do it. We stated earlier that we needed to understand the intent of an Assessment and use it going forward, but that is after the fact in terms of answering the question: why do an Assessment and continue on with Process Improvement in a Quality Assurance environment in the first place?

    The answer is that Process Improvement saves time and money. We do not carry out any Process Improvement activity without the intent of saving money; the activities we do must have a positive ROI. This, however, is the more difficult question to answer, since the positive ROI is not always in the current project. Putting in a defect management process, improving the review process, and ensuring early involvement of Quality Assurance personnel in a project all benefit the next project, but not necessarily the current one.

    The objective is to save the overall organisation funds; not just the current project. As such the results of an Assessment and the activities done as part of Process Improvement must be assessed at the corporate level and not at the project level.

    Next Week: Scheduling Test Cycles