Hosted by Curiosity, Katalon, OctoPerf, WireMock and XRay
In this talk, we explore the tools used in AI Research to understand and test AIs themselves, as well as systems that integrate AI, and Learning Pipelines - and how we can leverage them!
Artificial Intelligence has become an important tool and topic for accelerating testing and quality efforts. However, as more of the systems and applications we are responsible for integrate AI tools, how do we ensure the quality of the AI infused into them? How do we expand our testing and quality practices to cover the AIs and the associated applications themselves?
Integrating smart tools we don't fully control is a challenge. How can we build our applications to be as resilient as possible in the face of it?
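As one flavour of that resilience, here is a minimal sketch of defensive wiring around an AI service we don't control: retries with exponential backoff, then a deterministic fallback the application can handle. The `call_llm` wrapper and `resilient_completion` helper are assumptions for this illustration, not tools from the talk.

```python
# A minimal sketch (assumed names, not a definitive implementation) of
# defensive wiring around an external AI service: retry, backoff, fallback.
import time

class AIServiceError(Exception):
    pass

def call_llm(prompt: str) -> str:
    # Stand-in for a real network call to an external LLM (assumption);
    # here it always fails, so the sketch exercises the fallback path.
    raise AIServiceError("service unavailable")

def resilient_completion(prompt: str, retries: int = 3, backoff: float = 0.1) -> str:
    """Retry transient failures, then degrade gracefully instead of crashing."""
    for attempt in range(retries):
        try:
            return call_llm(prompt)
        except AIServiceError:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    # Fallback: a safe, deterministic default the caller can detect and handle.
    return "UNAVAILABLE"

print(resilient_completion("Summarise this ticket"))  # -> "UNAVAILABLE"
```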
Fuzzing, adversarial testing, GANs, simulated data and statistical tests are all techniques we will consider. We will also talk about how we can maximize consistency when we ultimately don't control the quality and availability of the LLMs directly. The way we build applications is changing; it's time to be ready for how we ensure their quality, too!
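To make the fuzzing flavour concrete, here is a minimal property-based sketch using the Hypothesis library. The `summarize` function is a hypothetical stand-in for an LLM-backed feature, and the invariants are illustrative: since we rarely know the one "correct" output, we assert properties any acceptable output must satisfy.

```python
# A hedged sketch of property-based fuzzing for an LLM-backed function.
# `summarize` is an assumed stand-in, not the talk's actual test suite.
from hypothesis import given, settings, strategies as st

def summarize(text: str) -> str:
    # Stand-in for a real LLM call (assumption); the truncation keeps
    # this sketch runnable offline and deterministic.
    return "Summary: " + text[:100]

@settings(deadline=None)  # external AI calls are slow and variable
@given(st.text(min_size=1, max_size=2000))
def test_summary_invariants(text):
    summary = summarize(text)
    # We can't know the "correct" summary, but we can assert properties
    # that must hold for any acceptable output:
    assert isinstance(summary, str) and summary.strip()  # non-empty
    assert len(summary) <= len(text) + 100               # bounded length
```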
The classical problem with AI is that we don't necessarily have full knowledge of the expected results; often the AI's output is our best answer, so evaluating it can be challenging. AIs are also prone to hallucinations and other problems, such as glitch tokens. Even more urgently, integrating external LLMs brings consistency challenges all of its own.
There are things we can do though!
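For instance, when no single expected result exists, we can test the output distribution instead of an exact answer. Below is a hedged sketch of a statistical consistency check, where `classify` stands in for repeated calls to an external LLM and the 90% agreement threshold is an illustrative choice, not a standard.

```python
# A hedged sketch of a statistical consistency test for a non-deterministic
# model. `classify` and its behaviour are assumptions for this example.
import random
from collections import Counter

def classify(ticket: str) -> str:
    # Stand-in for an LLM call (assumption): mostly stable, occasionally noisy.
    return random.choices(["billing", "technical"], weights=[0.98, 0.02])[0]

def test_classification_is_stable(runs: int = 50, threshold: float = 0.9):
    labels = [classify("My invoice is wrong") for _ in range(runs)]
    label, count = Counter(labels).most_common(1)[0]
    agreement = count / runs
    # With no single "expected" answer, we test the distribution instead:
    # the modal label should dominate, or the integration is too unstable.
    assert agreement >= threshold, f"only {agreement:.0%} agreement on {label!r}"

test_classification_is_stable()
```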
Ben Johnson-Ward, VP Solutions Engineer at Curiosity, has spent the past 12 years pioneering testing tools and techniques for global banks, retailers, insurance companies, telcos and beyond. He has occupied many of the roles associated with "quality", including developer, product owner, product manager, automation engineer and tester. Ben has often gravitated towards model-based testing and test data, working as a product manager and consultant for tools used to create and optimize tests across many different technologies and projects. Ben has also focused on the use of Generative AI for testing, serving as a product manager and services engineer for multiple tools. He has explored the fringe possibilities and disruptive capabilities of AI, alongside techniques which are emerging as enterprise-ready.
Watch more webinars, or talk with an expert to learn how you can embed quality test data throughout your software delivery.
Register for Curiosity's upcoming live webinars, or watch past webinars on demand to learn about...
Explore Curiosity's webinar collection. See more.
Self-service data generation at a large financial institution: discover how on-demand generation replaced 5-day provisioning with instant and unlimited data. Read the full story.
Meet with a Curiosity expert to learn how you can advance your software quality and productivity. Book now.