NVP Software Solutions will be participating in software testing and quality assurance events happening this October in Ontario, Canada. The events take place in Toronto, Kitchener-Waterloo, and London over the coming two weeks. Check out the relevant websites for more information and to register. This is a great opportunity to connect with other software testing and quality assurance professionals. We hope to see you there!
Volume Testing
Volume testing confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause it to stop working or degrade its operation in any way.
Risk if not completed: if the expected volume of records is not verified during testing, the complete system may fail to operate correctly and completely once all data is in place.
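To make this concrete, here is a minimal sketch in Python; process_log() and its record format are hypothetical, invented purely for illustration. The test deliberately feeds the routine far more records than normal use would produce and asserts that it still completes correctly:

    # Hypothetical example: process_log() stands in for any routine that
    # consumes data which grows over time (logs, counters, data files).
    def process_log(lines):
        # Toy implementation: count the non-empty records.
        return sum(1 for line in lines if line.strip())

    def test_volume_one_million_records():
        # Generate far more records than normal use would produce.
        records = ("event,%d\n" % i for i in range(1_000_000))
        # The program must accommodate the volume without failing;
        # here we simply assert the result is still correct.
        assert process_log(records) == 1_000_000

A fuller volume test would also watch memory use and run time, since degraded operation counts as a failure here.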
Negative Testing
In many software testing scenarios, testers can use positive testing and/or negative testing methods. Positive testing means that the item being tested reacts as expected when the expected input is entered. Negative testing typically means that the system can handle invalid input or unexpected user behaviour. However, if the system is expected to reject invalid data and display an error message, then verifying that rejection is arguably positive testing rather than negative testing: the system is doing exactly what was expected of it.
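As a minimal sketch in Python with pytest (parse_age() is a hypothetical function invented for this example), the first test is a classic positive test, while the second exercises invalid input; note that because the error is the expected response, the second test arguably verifies expected behaviour too:

    import pytest

    def parse_age(value):
        # Hypothetical function under test: accepts non-negative integers only.
        if not value.isdigit():
            raise ValueError("age must be a non-negative integer")
        return int(value)

    def test_parse_age_positive():
        # Positive testing: expected input yields the expected result.
        assert parse_age("42") == 42

    def test_parse_age_invalid_input():
        # Invalid input must be rejected with a clear error, which is
        # itself the specified, expected behaviour for this input.
        with pytest.raises(ValueError):
            parse_age("forty-two")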
System Boundary Diagram
System Boundary Diagrams sometimes come up in the context of a Use Case and sometimes in the context of Software Testing. Either way, they are useful when determining what to test. While the 'normal' System Boundary Diagram shows the boundaries of the system, and thus the boundaries of the testing, we try to use it only as a starting point for other diagrams that may also help define the testing effort and scope.
Why Test Training
Test training is something that should be a 'given' and not something that a blog series should be devoted to. However, we get a surprising number of questions about test training and plans, so we thought we'd address a few of them here. So why train testers? You may recall that we defined three broad categories a couple of weeks ago in the blog.
Scope of Testing
The Scope of Testing may relate to how much testing we are going to do, or it may relate to how much of the system we are planning to test. The amount of testing could be defined as doing multiple phases of testing with differing aims. How much of the system we plan to test, and actually do test, can be measured by a coverage tool (see the example below). These two definitions are not independent of each other. Whichever definition you decide to use, be prepared for some arguing.
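For example, on a Python code base with a pytest suite, coverage.py can quantify the second definition of scope, i.e. how much of the system the tests actually exercise. Assuming coverage.py and pytest are installed, two commands suffice:

    # Run the test suite under the coverage tool, then report per-file
    # statement coverage, including the lines that were never executed.
    coverage run -m pytest
    coverage report -m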
The (not entirely rhetorical) questions you will get asked include the following:
- You are planning on how much testing? (too much or too little)
- What makes you think that is enough testing? (too little)
- You are planning on testing that? (should be included or should not be included)
- On what did you base that estimate or expectation?
The questions can go on like that for ages. I have one client who does not want any errors in production; their mantra is 'everything is in scope, and test as much as you can.' They are no more successful than anyone else, and sometimes less so.
Whatever your policy, the following will guide the creation of the scope:
- What must be tested?
- What can be tested (within budget and time constraints)?
- What is the best use of the resources?
Make sure to document the scope up front and get it signed off. That will reduce the problems later on and create a much more harmonious working relationship.
Of course, once we have defined the scope, we then need to define what is Out of Scope. More arguments are on the way! Incidentally, Out of Scope is not simply everything that is not In Scope; it must be explicitly specified.
Discussion Questions
- Do you define your Scope of Testing?
- Has it been disputed?
- What would you have done differently based on what you know now?
Next Week: Training
Test Training
Training seems like an obvious topic and not one to which a blog or two could be usefully devoted. However, we get a surprising number of questions about training and plan to address a few of them here. The first one is what type of training is offered. We define three broad categories here:
- Training related to testing.
- Training related to a particular Test Tool.
- Application related Training.
You only have to read the job advertisements to see the expectations related to open positions. You may see a long list of test tools with which the applicant is expected to be proficient. You will most likely see some reference to a Test Methodology or the SDLC. Most job advertisements finish off with some soft skills.
So how do our three categories relate to day-to-day work?
Taking them in reverse order:
Application related Training
Clearly, the more a person knows about the application and the business area for which the system was built, the easier it is to understand the risks, define the scope of testing, and explain the results to the business. It is also easier to understand the business requirements and expectations.
Training related to a particular Test Tool
This type of training is usually supplied by a vendor and can range from an overview of the test tool, allowing one to use it without in-depth knowledge, all the way to becoming a technical expert. The only caveat is that every tool is eventually superseded by something else, so every tool or technical process will eventually become redundant.
Training related to testing
This type of training covers the rest of the requirements. It teaches about SDLC, Communication, Risk, Planning, and Testing to name only a few items.
Discussion Questions
- Do you participate in Training for Testing?
- Was it beneficial to the project?
- What would you have done differently based on what you know now?
Next Week: Sources of Information
How interactive prototyping can improve QA in the SDLC
It’s often said that quality must be built in, not added on. But when it comes to the Software Development Lifecycle (SDLC), the reverse often happens: defects are identified late, in the Testing Phase, after coding is done. This means bugs are expensive to fix and solutions are found last-minute, putting quality at risk. Early Lifecycle QA, from requirements definition onward, results in a better software development experience and, hopefully, a better end product.
But even when Early Lifecycle QA does happen, it’s not always plain sailing: business requirements documents are often scanty and don’t provide QA professionals with enough information; other stakeholders may be resistant to QA specialists coming in and “telling them their job” at the review stage; and some requirements are untestable due to a lack of clarity. And of course, things change throughout any project; that’s a fact. Flexibility is a must.
So how can QA professionals ensure that they get involved and stay effective from the outset of the SDLC and throughout it? Enter interactive prototyping. Using an interactive prototyping tool can facilitate early-stage QA and avoid common pain points.
Requirements definition and gathering
QA specialists sometimes receive little information on which to base tests at this stage, thanks to paltry requirements or incomprehensible Business Requirements Documentation (BRD). Additionally, QAs are often sent the documentation too late, meaning there’s no time to set up adequate tests. By defining and gathering requirements with a prototyping tool, requirements can be imported or created directly in the prototype, and all invited stakeholders (including QAs) can add to or comment upon those requirements in real time. Once you have the baseline of requirements, a System Testing Plan can be finalized.
Interactive requirements and iterative process
Once the BRD and System Requirements Specification are agreed upon, the QA team can set about reviewing requirements in the prototype. Running user test cases with a designated User Proxy – someone who takes on the role of the user – allows QA to be approached from three angles: functional, structural, and conformance. All QA team members can add to and edit the BRD in the prototype, ensuring that user and system needs are accurately represented at this early stage.
Using a prototyping tool to facilitate this process reduces time and budget concerns for project managers, which means they are more likely to agree to incorporating QA teams early on.
Design and QA
With a version history of requirements accessible within the prototype, the design team has a clear map to work from. They can build an interactive prototype based on the validated requirements, linking each feature to its relevant requirement and thereby facilitating QA testing. Once the design team has produced a high-fidelity prototype, activities such as verifying the system architecture and carrying out system audits can be done on the prototype. Finding and fixing bugs through prototype testing is a lot cheaper than fixing them in the code.
Coding and Deployment
Later SDLC stages can now go ahead, with the QA team carrying out coding-related Quality Assurance activities such as verifying the implementation of top requirements and checking the quality of code with Product Quality Analyzer tools.
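As one illustration of such a check (the original post does not name a specific tool, so pylint is purely our example), a general-purpose static analyzer can be run over a Python code base as part of the coding-stage QA gate; mypackage/ is a placeholder for the project’s source directory:

    # Static analysis pass over the source; reports style, convention,
    # and likely-bug findings along with an overall quality score.
    pylint mypackage/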
Key Success Markers
Early Lifecycle Quality Assurance requires collaboration between teams and a shared vision, factors supported by the inclusion of interactive prototyping in the SDLC. By prioritizing Early Lifecycle QA, rework and costs are reduced, QA input is incorporated at every stage of the project, and time to market is optimized.
Justinmind is a prototyping tool for web and mobile applications that allows you to visualize your software solution before starting development.