AI might be the most overused buzzword of the decade, but in software testing, it’s unlocking real value. From model-driven testing that improves collaboration and coverage, to agentic AI systems that can observe, reason, and act, today’s AI tools are reshaping how we think about quality, speed, and risk.
Join Daniel Howard (Senior Analyst, Bloor Research) and Jonathon Wright (Chief AI Officer, Keysight) for an honest discussion of how AI is transforming software quality engineering rather than replacing it. Learn how to bring humans and machines together for smarter, faster, more accountable testing.
In This Session, You’ll Learn:
• How automation and model-driven testing lower the barrier to quality, increase collaboration, and reduce risk
• Why generative and agentic AI should support—not replace—human testers, and where they can add measurable value
• What metrics, frameworks, and governance practices matter most to drive continuous improvement and stay compliant
Not in the Americas? A Europe-friendly session is also available: join here.