Tag: Quality Assurance

  • Validation

    Quality Assurance has a number of subdivisions, one of which is Quality Control (sometimes referred to as Software Testing). We have been discussing Verification for the last four weeks; see our blog. Today we start into Validation.

    We define Validation as the active part of testing, where completed, compiled, and promoted code is run with data and generates results, while Verification is the static part of testing. Note that ‘promoted’ includes unit testing on the developer’s workstation, so the promotion may not be very formal, nor may the compile amount to much.
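
    As a minimal sketch of Validation at its least formal level (a unit test on the developer’s workstation), the example below runs completed code with data and checks the generated results. The function and values are hypothetical and purely illustrative.

      # Sketch of unit-level Validation: run the code under test with data
      # and check the results it generates. All names and values are hypothetical.

      def apply_discount(price: float, percent: float) -> float:
          """Code under test: return the price after a percentage discount."""
          return round(price * (1 - percent / 100), 2)

      def test_apply_discount():
          # Active testing: execute the code with data and validate the results.
          assert apply_discount(100.00, 10) == 90.00
          assert apply_discount(19.99, 0) == 19.99

      if __name__ == "__main__":
          test_apply_discount()
          print("Unit-level validation passed.")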

    Validation includes all aspects of active testing, including the major divisions of White Box and Black Box (along with the various Grey Boxes in between). Because the code has to be completed, Validation cannot be launched until a lot of work has already been done. This adds to the cost of completing this type of testing and means that any defects found tend to be expensive to correct and retest. Hence our emphasis on Verification as being the more generally cost-effective approach.

    A lot of work has been done to enable Validation to proceed smoothly and effectively. Many testers start their careers in this area and then move into various aspects of the process as they become more specialised. Some go towards Automation; others move into specialised areas like Performance or Security testing.

    While the concept is simple, the identification of the most cost-effective place to use Validation techniques is not always easy. There is no point in completing Validation without some end in view and a positive ROI. Give us a call at 416-927-0960 or visit our website at NVP.ca to find out where you would benefit from the implementation of Validation techniques in your organisation.

  • Quality Assurance Supports Verification

    Quality Assurance supports verification for a number of reasons, not least of which is the potential for cost savings for you! The key is to know where to apply the appropriate level of verification. It is quite possible to verify too many items and lose the benefit of reduced rework because it is all used up in the verification process itself. It is equally possible to verify too few times and lose the momentum and the benefit of corporate knowledge.

    Some projects have problems identifying the correct place to implement verification techniques and either wait until it is too late in the project or start too intensely too early. The last mistake that some organisations make is failing to match the verification technique to their level of process maturity. There are prerequisites needed to make the verification process effective, and without those in place the positive impact is limited. It is also necessary to have the correct stakeholders involved so that they give their input at the correct point and with sufficient weight.

    For all of the above reasons, it is critical that an assessment be carried out on the existing processes to determine the following:

    • The appropriate place to use verification techniques in your projects.
    • The payback realised from the implementation of verification techniques.
    • The level of Verification carried out for each project artifact.
    • The relevant stakeholders who gain from the process and can see the benefits.
    • The use of the correct techniques based on SDLC maturity.

    Once all of the above are known, the prerequisites can be put in place and an effective verification process implemented for your benefit. NVP Software Solutions completes that assessment and provides Recommendations and a Roadmap to get you there. You realise instant benefits in reduced rework and reduced cost.

  • Verification Applicability in Software Testing

    Last week we introduced the concept of verification and defined it. This week we want to discuss verification applicability. Many people agree that verification should be done but come to a halt when deciding where they can apply it and what degree of formality should be used. Verification really covers everything that can be done in the project in terms of testing, excluding the actual Quality Control or active testing. In other words, we can verify almost everything.

    • Business Objectives – can be verified for feasibility, correctness, completeness and how accurately they reflect what the business needs
    • Requirements – can be verified for correctness, completeness, feasibility, testability, and how well they reflect the underlying business objectives
    • Design – can be verified for correctness, completeness, feasibility, testability, and how well they represent the defining requirements
    • Code – can be verified for adherence to standards, readability, level of comments, structure, whether it reflects the design, maintainability, portability, testability
    • Test Plans – can be verified for readability, correctness, applicability to the testing at hand, ability to execute what is required, scope
    • Test Cases – can be verified for coverage, maintainability, readability, ability to execute, portability
    • Test Data – can be verified for coverage, security (anonymity), maintainability, portability, retention
    • Test Reports – can be verified for usefulness, readability, completeness

    The above is only a partial list of the major items that can be verified. The list can include any artifact produced during the project, such as minutes of meetings, online conversations, transcribed verbal conversations, and emails. If it can be read, it can be verified!
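
    As an illustration only (our sketch, not a prescribed NVP tool), the criteria above can be captured in a simple checklist structure that a review can be driven from. The artifact names and criteria come from the list above; the code itself is hypothetical.

      # Sketch of a verification checklist keyed by artifact type.
      # Criteria are taken from the list above; the structure is illustrative.

      VERIFICATION_CRITERIA = {
          "Requirements": ["correctness", "completeness", "feasibility", "testability",
                           "reflects the underlying business objectives"],
          "Design": ["correctness", "completeness", "feasibility", "testability",
                     "represents the defining requirements"],
          "Test Cases": ["coverage", "maintainability", "readability",
                         "ability to execute", "portability"],
      }

      def checklist_for(artifact_type: str) -> list:
          """Return the review questions for one artifact type."""
          return [f"Is the {artifact_type} artifact acceptable for: {criterion}?"
                  for criterion in VERIFICATION_CRITERIA.get(artifact_type, [])]

      if __name__ == "__main__":
          for question in checklist_for("Requirements"):
              print(question)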

    The issue is to find the most cost-effective places in your development process to implement verification techniques. There is no point in completing Verification processes without some end in view and a positive ROI. Give us a call at 416-927-0960 or visit our website at NVP.ca to find out where you would benefit from the implementation of Verification techniques in your organisation.

  • Examples of Verification in Software Testing

    In our last blog post, we included a list of examples of Verification without going into too much detail. In this blog we will take a couple of those verification list items and expand on them. The verification examples that tend to give software testers the best return are Test Plans, Test Cases, Test Data, and Test Results.

    Test Plans

    Applying verification techniques to a Test Plan can save hours of effort. The three methods we can use to review a test plan are:

      • Walkthrough – in this method, the author of the Test Plan ‘walks’ one or more of their peers through the test plan, explaining the sections and what is meant by the content. The role of the peers is to find problems, omissions, extra content, incomplete items, and inconsistent items, and to add anything they feel was missed. The intent is to produce a better document that more accurately reflects the needs of the project stakeholders. A new version is issued after all the errors are corrected and is used going forward.
      • Document Review – In this method, a review committee (preferably selected from the interested stakeholders) reviews the documents and records the same items as listed above for the walkthrough. Once they have finished their individual reviews, they come together to create a final list of problems that need to be corrected. A new version is issued after all the errors are corrected and is used going forward.
      • Inspections – In this method, formalised roles are defined and assigned, and a procedure is followed to ensure proper inspection of the document. The intent and result are the same as for the previous two methods; the only difference is the degree of formality.

    What’s the point of all this? The payback from reviewing the plan (using any of the methods above) more than pays for itself in terms of fewer errors going forward and less work to be undone, redone, and redone (again).
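
    For teams that want to record findings consistently across the three methods above, a simple structure like the sketch below can help. The field names and categories are our own illustration, based on the kinds of items listed earlier (omissions, extra content, inconsistencies, incomplete items).

      # Sketch of a record for findings raised during a walkthrough, document
      # review, or inspection of a test plan. Field names are illustrative.
      from dataclasses import dataclass
      from collections import Counter

      @dataclass
      class ReviewFinding:
          section: str       # test plan section where the issue was found
          category: str      # e.g. "omission", "extra content", "inconsistency", "incomplete"
          description: str
          raised_by: str

      def summarise(findings: list) -> Counter:
          """Count findings by category so the rework can be sized before a new version is issued."""
          return Counter(f.category for f in findings)

      if __name__ == "__main__":
          findings = [
              ReviewFinding("Scope", "omission", "Regression scope not stated", "peer reviewer"),
              ReviewFinding("Schedule", "inconsistency", "Dates conflict with the release plan", "stakeholder"),
          ]
          print(summarise(findings))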

    If you know your test plan is poor, or you’re not even sure where to start, give us a call at 416-927-0960 or visit our website at NVP.ca to find out where you would benefit from the implementation of Verification techniques in your organisation.

  • Verification

    Quality Assurance has a number of subdivisions, one of which is Quality Control (sometimes referred to as Software Testing). One of the largely underrated aspects of Quality Control is Verification.

    We define Verification as the static part of testing, while Validation is the active part, where completed, compiled, and promoted code is run with data and generates results. Verification refers to the aspects of testing that do not involve actually executing the code and includes Inspections, Walkthroughs, and Reviews.

    Since we do not need code, it is possible to launch into Verification much earlier in the process, without waiting for code to be completed. As someone once pointed out, the first time you pick up some artifact from the project, whether it be a document, minutes of meetings, a communication, or even a conversation, and start to look at it, you are performing your very first test of that system.

    The three major subdivisions mentioned above differ in their degree of formality and where they may be applied. In general, Inspections are the most formal of the processes and have a well-developed methodology. Reviews can be formal or informal and can be done at various points, depending on how necessary they are and what the intent is. Walkthroughs can be very informal and used with minimal effort and training.

    The major impact of this type of testing is to discover issues, misconceptions, lack of clarity, incompleteness and misinterpretations as soon as possible.

    While the concept is simple, the identification of the most cost-effective place to use Verification techniques is not always easy. There is no point in completing Verification processes without some end in view and a positive ROI. Give us a call at 416-927-0960 or visit our website at NVP.ca to find out where you would benefit from the implementation of Verification techniques in your organisation.

  • Factory Testing

    Factory Testing is one of the terms that leads to a lot of acrimony in the industry. There are people who believe in it and will evangelize for it, and others who claim it stifles every innovative thought ever made.

    Some of the people who believe in it fervently are those whose clients have exacting expectations. They need to be able to tell those clients with certainty what tests were executed, when they were executed, and what the results from those tests were. At the end of testing, there is no question as to what has been done, and the decision about whether to proceed is easy.

    Those who believe that this process stifles creativity think that simply following fixed scripts without thought reduces the role of a tester to that of a checker rather than a tester, and that the work might as well be done by someone without a testing background.

    There is room for both views, but they need to be applied at different points of the Software Development Life Cycle.

    Creativity needs to be applied (in both views) at the time of Test Case or Test Objective creation. We certainly need creativity and original thought in determining what to test and how to test it. Once that has been done and the final client has agreed that the coverage is sufficient, the paths diverge a little, but not as much as some people think.

    In the Factory Testing method, the testers (or checkers) must finish all the approved tests in the way they have been written. This does not preclude them from discovering other ways of testing and new tests that can and should be completed; it just means that they may not necessarily complete them immediately for the current cycle or release. Those tests certainly should not be lost and can be used to improve the coverage and the process in the next cycle or release, or even in the current period if they are important enough.

    In the other methodologies, we can still use the approved tests as a starting point and then begin to add and expand. However, we do need to know the best way of augmenting tests while ensuring coverage. Quality Assurance’s focus on statistical analysis allows those figures to be calculated and that decision to be made, as the sketch below illustrates.
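
    As a rough illustration of the kind of figure involved (our sketch, with hypothetical test and requirement names), requirement coverage can be recalculated as approved tests are executed and newly discovered tests are queued for a later cycle.

      # Sketch: run only the approved tests this cycle, queue newly discovered
      # tests for the next cycle, and compute a simple coverage figure.
      # All test IDs, requirement IDs, and data are illustrative.

      approved_tests = {"TC-01": {"REQ-1"}, "TC-02": {"REQ-2"}, "TC-03": {"REQ-2", "REQ-3"}}
      discovered_tests = {"TC-04": {"REQ-4"}}   # found during execution; queued, not run this cycle
      all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

      def coverage(tests: dict, requirements: set) -> float:
          """Percentage of requirements touched by at least one test."""
          covered = set().union(*tests.values()) if tests else set()
          return 100 * len(covered & requirements) / len(requirements)

      current = coverage(approved_tests, all_requirements)
      next_cycle = coverage({**approved_tests, **discovered_tests}, all_requirements)
      print(f"Current cycle coverage: {current:.0f}%")
      print(f"Next cycle, with the queued tests added: {next_cycle:.0f}%")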

  • Early vs Late Testing

    Early versus late testing within a project is an ongoing debate in the software testing industry. With the Waterfall methodology, active testing was not possible until late in the development cycle, when the project was ‘thrown over the wall’ to testing and the testers started what some called the ‘final exam’. A long test-fix cycle often ensued. Many newer methodologies promote earlier involvement, and many of the most recent ones promote continuous testing of multiple builds using test automation. With a broad definition of testing that encompasses more than just direct Validation, it is quite possible to start some form of testing very early in the project. However, some feel that we have obtained as many benefits as we possibly can from early testing and need to bring back an emphasis on late-cycle testing, when the system is fully developed, in order to prove everything works together.

    This debate misses the point of the testing process. The only real question is when it is appropriate to test, and this can be answered at planning time with the help of Quality Assurance. Testing has to be completed when it is appropriate and when the necessary items are in place to support that endeavour. During the test planning process, it is necessary to identify what testing needs to be completed. Once we know what has to be completed, we can identify where in the process it can be completed. Quality Assurance processes allow that identification to take place easily. With some overarching process in place that indicates what the organisation requires in the way of risk reduction, it is easy to identify what testing is required and then map it to the appropriate place in the Software Development Life Cycle.
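
    As an illustration of the kind of mapping this planning produces, the sketch below pairs required test activities with the SDLC phase where they are most appropriately completed. The phases and activities shown are generic examples, not a prescription.

      # Sketch: map required testing activities to the phase of the SDLC
      # where they can be completed. Phases and activities are illustrative.

      TEST_ACTIVITY_PHASE = {
          "Requirements review (verification)":  "Requirements",
          "Design walkthrough (verification)":   "Design",
          "Unit testing (validation)":           "Construction",
          "Automated regression on each build":  "Construction / Integration",
          "End-to-end system testing":           "Late cycle / System Test",
      }

      def plan(activities: list) -> dict:
          """Return the phase in which each required activity should be scheduled."""
          return {a: TEST_ACTIVITY_PHASE.get(a, "To be decided at planning") for a in activities}

      if __name__ == "__main__":
          for activity, phase in plan(["Design walkthrough (verification)",
                                       "End-to-end system testing"]).items():
              print(f"{activity} -> {phase}")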

    While Early or Late testing may not always be appropriate, early planning is always necessary and Quality Assurance guides that planning.

  • DevOps and Quality Assurance

    A recent article in Techwell highlighted two interesting points for DevOps and Quality Assurance.

    The main problems seem to be included in the following paragraph from the article:

    ‘One problem with this approach is that the business sometimes cannot handle changes so rapidly. The other problem is that too often, the QA and testing team cannot keep up with the need to verify and validate changes. DevOps may be able to deliver changes to production, but the rest of the team may not be able to handle continuous deployments. Improving the QA and testing function can help deal with these challenges and enable organizations to effectively handle the impact of continuous delivery and continuous deployment.’

    The first question is in the initial sentence: can the business handle the rapid changes? The better question to ask is whether the business actually wants the rapid changes in the first place. If they express the need for the changes and agree to the timetable for their delivery, then they would be expected to commit the resources and allocate the time to learn and utilise the changes as they arrive. An assessment of the ability of the business to absorb the rapid pace of change, and a subsequent prioritisation of the order of changes, would reduce this problem substantially.

    If the business expresses the desire for all the changes and states that the delivery schedule is necessary, then we come to the second problem: the ability of Quality Assurance and Software Testing to deliver. As a recent comment on LinkedIn put it, “Are you sure it is testing you are doing?” Or, as it might be better worded, “Are you sure it is testing you should be doing?” It might be better to check items earlier and determine what is necessary rather than assuming that Testing is all that is required. Quality Assurance carries out that assessment.