Tag: Process Improvement

  • Juggling multiple streams both from a testing POV

    Multiple streams were necessary but the merging was not well organised.

    In the last post we discussed the merger.  It was rocky, but eventually everyone had provided their input and we moved into integration and the subsequent testing phases.  As is usual, some of the problems should have been caught earlier, and there was a degree of rework we could have avoided.  Steps 1 and 2 did reduce the rework quite a bit.

    Step 3 was to blend all the streams together and test them as one integrated whole.  It took a while, but the testing progressed satisfactorily.

  • Juggling multiple streams from a testing Point of View

    Multiple streams were necessary but the merging was not well organised.

    For those who are reading regularly, you may recall the two “It’s your move” posts from February.  The piece we did not include there was that the project started with multiple streams that needed to be brought together.  Project plans had been created for each stream, although the allotted time for QA and testing seemed to have vanished somewhere.  Clearly some of the streams planned to test only after merging, on the assumption that it would not be their problem.  Shift Left, or any of the other terms suggesting early testing, seemed to be lost on them.  Of course, the streams were not necessarily going to converge at the same time.

    Step 1 was to set expectations for each stream as to the conditions for being merged.  There was pushback from some of the people involved, but it was a case of doing it now, when it was simpler and cheaper, or later, when it would be more expensive.
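    One way to make such merge expectations concrete is to encode them as an explicit, checkable gate.  The sketch below is purely illustrative — the criteria names and thresholds are assumptions, not the actual conditions used on the project — but it shows the idea of each stream having to satisfy the same stated exit criteria before merging.

    ```python
    # Illustrative sketch of per-stream merge-readiness criteria.
    # The fields and thresholds here are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class StreamStatus:
        name: str
        unit_tests_passing: bool
        open_critical_defects: int
        test_coverage_pct: float

    def ready_to_merge(s: StreamStatus,
                       max_critical: int = 0,
                       min_coverage: float = 80.0) -> bool:
        """A stream may merge only when all of its exit criteria are met."""
        return (s.unit_tests_passing
                and s.open_critical_defects <= max_critical
                and s.test_coverage_pct >= min_coverage)

    streams = [
        StreamStatus("payments", True, 0, 86.5),
        StreamStatus("reporting", True, 2, 91.0),
    ]
    for s in streams:
        print(f"{s.name}: {'ready' if ready_to_merge(s) else 'not ready'}")
    ```

    Writing the criteria down in a form like this removes the ambiguity that caused the pushback: a stream is either ready or it is not, and everyone can see why.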

  • TestFormation 2026

    Join us again for TestFormation 2026 – now even bigger and better than ever!

    This year’s theme, “Elevating quality in the age of intelligent autonomy: driving trust, transparency, and governance in AI-powered testing”, promises transformative insights through world-class keynotes, panels, and sessions.

    Keynotes:

    • Badal Bhushan (Distinguished Engineer, Walmart): “Testing Tryst in Autonomous AI Systems” – IC³T for agentic AI authority risks
    • Sheena Yap Chan (WSJ bestselling author): “Visible Confidence in AI-Powered QA” – Ethical leadership frameworks
    • Anu Thothathri (Cognizant QE Leader): “From Testing to Trust” – Modern QE for enterprise AI trust
    • Rogerio Castillo (Caliche Energy Solutions Founder): “Testing in the Age of AI” – TMMi for AI-era quality

    Featured Panel (10 AM):
    “Engineering Leadership in the Age of AI-Driven Quality”, moderated by Dr. Amanda Fetch (AI Advisor), featuring:

    • Tejas Pandit (MeshDefend.ai Co-Founder/CEO): Builds AI-native systems; ex-Dell global teams
    • Wendy Lally (Engineering Director): Hands-on leader in AI, supercomputing, platform engineering; ex-Intel/Dell
    • Himanshu Pathak (Meta QA Automation Lead): Transforms manual QA with GenAI

    Other sessions cover:
    Autonomous AI agent validation, TMMi cloud security maturity, AI-driven governance controls, data governance clarity, GenAI automation frameworks, model-based testing, agentic QA ecosystems, AI testing maturity models, Zero Trust, safe GenAI guardrails, and tool-enabled AI agents.

    Key Benefits:

    • Free, virtual, global access on March 12
    • Full session recordings available to all registrants – never miss a session!
    • Network via channels + win speaking slots next year, webinars, or newsletter articles!

    Presented by TMMi America Foundation.

    Register now (limited spots!):
    https://tmmiamerica2024.zohobackstage.com/TestFormation2026

  • Quality Coaching

    Quality Coaching focuses on guiding the process improvement required to make a difference in the way a department works.  The intent is not to repeat the mistakes of the previous projects but to look at new ways of working which prevent problems in the first place.

    Quality Coaching provides techniques to individuals enabling them to identify places where a Quality Improvement initiative could provide increased efficiency or reduced costs.

    Read the Case study at https://nvp.ca/wp-content/uploads/2021/10/Case-Study-6-Coaching.pdf for a particular example.

  • Fractional Quality Assurance and Process Improvement

    1.  Are your clients requesting auditable proof that your product works?
    2. Does your product work as expected in all cases?
    3. Do your backers need independent proof?
    4. Is your development proceeding without any issues?

    If your answers were Yes, No, Yes and No respectively, then you may be looking at a requirement for Quality Assurance and Software Testing solutions.  As a preliminary, consider the following when looking for a solution:

    1. Kickstarting Quality Assurance with minimal impact on your existing processes.
    2. Ongoing consulting at regular intervals to keep QA on track and make sure the requirements are being met.
    3. Process Improvement and team empowerment while maintaining the current product trajectory.
    4. Providing enhanced communications and delivery strategies to clients.
  • TASSQ and National Software Testing and Quality Engineering Conference 

    Two weeks to register

    TASSQ February 2026 Meeting

    AI in Quality Engineering: Lessons from Large-Scale, Multi-Vendor Delivery

    Presenter: Natalia Moyseyenko

    Location: Online – Zoom

    When: Tuesday February 24, 2026

    Networking: 6:00 – 6:30 p.m. EST

    Presentation: 6:30 – 7:30 p.m. EST

    Cost: $20.00 (CAD)
    Register at https://tassq.org/events.

    Presentation Abstract: This session shares practical lessons from applying AI in Quality Engineering within a large, multi-vendor enterprise environment.
    It covers where AI delivers real value in shift-left quality, automation, and decision-making—and where governance, data, and people still matter most. The focus is on what actually works at scale, not experiments or hype.

    Speaker Bio:  Natalia Moyseyenko is a Senior Quality Engineering Manager at EPAM Canada and QE Guild Lead, leading Quality Engineering delivery for a Tier-1, multi-brand North American retailer.
    She oversees QE governance and execution across 200+ Quality Engineers embedded in 700+ engineering teams within a complex, multi-vendor ecosystem.
    Natalia leads enterprise-level QA AI transformation with a pragmatic focus on shift-left quality, scalable automation, and decision-grade insights.
    She is a 4× EPAM CEO Award recipient for quality leadership and transformation impact.

    National Software Testing and Quality Engineering Conference 

    The National Software Testing and Quality Engineering Conference is scheduled to take place on May 26, 2026, at the Delta Marriott in Downtown Toronto, 75 Lower Simcoe St, Toronto, ON M5J 3A6, Canada.

    This conference is designed specifically for experts in software testing, quality assurance, and quality engineering, offering a new gathering tailored to their needs.

    The field is currently experiencing a revolution with the introduction of AI, making this an ideal moment for professionals to take charge and stay ahead of the curve.

  • If the shoe fits, get another one just like it.

    While the quote above may be correct when referring to shoes (as long as the ‘just like it’ includes making one for the other foot), it certainly does not apply to software or to assessments.

    This came up in a different context a couple of days ago with regard to documentation. A solution that was provided for one document was ported to another without consideration of the differences.

    Each project has many unique characteristics (otherwise we could simply grab an existing solution and implement it without concern), so every assessment will be different.  Each assessment will return a different answer, and nothing else should be assumed.  Every situation depends not only on the technology being used but also on the selected design, which is usually driven by market considerations.

    NVP’s assessments are crafted with that consideration in mind. Every situation is unique and the assessment process allows for that.  There are no preconceived notions. 

  • AI and The Test Lead/Manager

    In late 2025, NVP ran a webinar about the impacts of AI on a Test Lead/Manager.  We discussed both the changes from using AI in testing (test preparation, execution, and reporting) and testing an AI-infused system (non-deterministic output).  Recently we had a discussion with someone tasked with the latter problem and heard his solution to most of it.  Clearly there was more work to be done.  With the field of AI moving very quickly, some of what we said several months ago needs to be updated.  A new instance of this webinar will run in March 2026.  Stay tuned for dates, times, and registration details.