Webinar part of the Quality Horizon 2024

Testing AI, AI-driven systems, and ML pipelines

Hosted by Curiosity, Katalon, OctoPerf, WireMock and XRay

In this talk, we explore the tools used in AI research to understand and test AI systems themselves, as well as the systems that integrate AI and the machine-learning pipelines behind them - and how we can leverage those tools ourselves!

Artificial intelligence systems need testing!

Artificial Intelligence has become an important tool and topic for accelerating testing and quality efforts. However, as more of the systems and applications we are responsible for integrate with AI tools, how do we ensure the quality of the AI infused into them? How do we expand our testing and quality practices to cover the AIs and the associated applications themselves?

Integrating smart tools we don’t fully control is a challenge. How can we build our applications to be as resilient as possible in the face of it?

Fuzzing, adversarial testing, GANs, simulated data and statistical tests are all techniques we will consider. We will also talk about how to maximize consistency when we ultimately don’t control the quality and availability of the LLMs we depend on. The way we build applications is changing - it’s time to be ready for how we ensure their quality, too!
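
To give a flavour of the fuzzing and statistical-testing ideas above, here is a minimal property-based sketch in Python using the Hypothesis library. The classify() wrapper, the label set and the invariants are illustrative assumptions, not material from the talk: even without knowing the “right” answer, we can still fuzz inputs and assert properties that should always hold.

```python
# A minimal property-based (fuzzing-style) sketch using the Hypothesis library.
# classify() is a hypothetical wrapper around an AI call, stubbed so the example
# runs stand-alone; the label set and invariants are illustrative assumptions.
from hypothesis import given, settings, strategies as st

ALLOWED_LABELS = {"positive", "negative", "neutral"}

def classify(text: str) -> str:
    """Hypothetical wrapper around a model call; stubbed for illustration."""
    return "neutral" if not text.strip() else "positive"

@given(st.text(max_size=500))
@settings(max_examples=200, deadline=None)
def test_classifier_invariants(text):
    label = classify(text)
    # We may not know the "right" label, but we can still assert properties
    # that should hold whatever the model decides:
    assert label in ALLOWED_LABELS       # output stays within the expected set
    assert classify(text) == label       # repeated calls give a stable answer
```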

The classic problem with AI is that we don’t necessarily have full knowledge of the expected results - often the model’s output is simply our best answer - so evaluating it can be challenging, and models are certainly prone to hallucinations and other problems such as glitch tokens. Even more urgently, integrating external LLMs brings consistency challenges all of its own.

There are things we can do though!
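
For instance, one pragmatic option is to measure consistency statistically rather than insisting on a single expected answer. The sketch below is illustrative only: ask_llm(), the token-overlap similarity metric and the 0.6 threshold are hypothetical placeholders, not part of the webinar material.

```python
# A minimal sketch of a statistical consistency check for a non-deterministic
# LLM call. ask_llm() is a hypothetical client function, stubbed here so the
# sketch runs stand-alone; the similarity metric and threshold are assumptions.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; in practice this would hit an external API."""
    return f"Response about: {prompt}"

def token_overlap(a: str, b: str) -> float:
    """Crude Jaccard similarity over whitespace-separated tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def consistency_score(prompt: str, runs: int = 5) -> float:
    """Average pairwise similarity across repeated calls with the same prompt."""
    answers = [ask_llm(prompt) for _ in range(runs)]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(token_overlap(a, b) for a, b in pairs) / max(len(pairs), 1)

# Flag prompts whose answers drift too much between runs (threshold is arbitrary).
assert consistency_score("Summarise our refund policy") >= 0.6
```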

Meet Curiosity's speaker

Ben Johnson-Ward - VP of Solutions Engineering, Curiosity Software

Ben Johnson-Ward, VP of Solutions Engineering at Curiosity, has spent the past 12 years pioneering testing tools and techniques for global banks, retailers, insurance companies, telcos and beyond. He has occupied many of the roles associated with “quality”, including developer, product owner, product manager, automation engineer and tester. Ben has often gravitated towards model-based testing and test data, and has worked as a product manager and consultant for tools used to create and optimize tests across many different technologies and projects. He has focused on the use of Generative AI for testing, serving as a product manager and services engineer for multiple tools, and has explored the fringe possibilities and disruptive capabilities of AI alongside techniques which are emerging as enterprise-ready.

Your next step to better test data management

Watch more webinars, or talk with an expert to learn how you can embed quality test data throughout your software delivery.

Full data coverage, on demand access

Streamline your data delivery with our AI-powered Enterprise Test Data® platform.

Book a meeting
