Bad Data in Software Delivery | Why Didn't You Test That?
Why Didn't You Test That? The Curiosity Software Podcast featuring Huw Price and Rich Jordan! In this episode, Rich Jordan and Huw Price explore the concept of the data gambler and the sceptic, the tension of using commercially live data in software delivery, and why, ultimately, it's all about the data: without test data, there is no testing!
Without Test Data There Is No Testing
In this episode of Why Didn't You Test That?, Rich and Huw move the conversation on from GDPR compliance to bad data in software delivery. Together they explore the concept of the data gambler and the sceptic, the tension of using commercially live data in software delivery, and why, ultimately, it's all about the data: without test data, there is no testing!
“You’re building up technical debt, mitigating the need to use live production data, which comes with negative outcomes. The real headspin is that you need to address the unknowns. At best, using live data to mitigate these unknowns can only be done once.”
- Rich Jordan, Enterprise Solutions Architect, Curiosity Software
“If you understand how to test, it’s all about the data - the decision gates - so you shouldn’t be worrying about production-specific data. How do we break this over-reliance on the use of production data?”
- Huw Price, Managing Director, Curiosity Software
-
Shownotes
00:29 - Beyond the feature factory: the gambler and the sceptic
01:30 - The tension of using commercially live data: is it more than a side gig in an organisation’s priorities?
03:27 - Control theatre: are audit and security teams detached from the delivery of software?
04:20 - Realising the cost to the business when legislation isn’t pushed down and integrated at an IT level.
05:30 - Impact of lag in pushing GDPR guidance.
06:18 - Learning from the EuroSTAR Conference: Huw Price’s takeaway is that it is possible to ‘eliminate production/live data from testing’.
08:03 - The Swindon roundabout analogy: it’s all about the data and the decision gates, rendering production/live data irrelevant to testing! Can we change the narrative from buzzword-compliant testing to good, observable white-box testing? Cause-and-effect modelling: prove your requirement has worked with the minimal number of tests, thinking about the SUT, coverage, integration and unit testing, and combinatorial techniques (see the sketch after these shownotes). Do you have the right result for the right reason?
10:21 - Production data increases technical debt: how do we break the mantra that production/live data in test exercises more than just the surface of the SUT? In fact it’s a low bar that introduces chaos into a system, increasing technical debt.
12:28 - Design through Data Security to shore up predictability and repetition, reducing disorder in a system and setting the bedrock for dynamic automation: data analysis can only help in understanding the impact of duplicated tests. We also consider the headspin of balancing conduct risk with regulatory compliance.
14:47 - Do critical changes just once? This reduces the number of logic gates in each test, and so reduces test bloat. Spin up just the right amount of Dynamic Automation.
16:45 - Critical thinking around test cases: start with manual tests, then move towards negative testing, for instance coping with nulls.
18:00 - What does your customer look like in terms of data? Data quality is time-dependent, so continually improve the set of automated tests on validation routines, and think in terms of continuous analysis against data structure standards and an evolving UI.
21:00 - Production data only touches 20% of code: shoring up the dev release with power users, the QA release, and the pre-prod release by simulating activities. Memories of working next to developers and users in the mainframe days.
24:00 - Federated vs centralised data: organisations make the business user the power user earlier in the software delivery, but the outcome is limited to a black-box understanding.
25:00 - Don't replace one monolith with another: the example of Team Ferret, who moved beyond being an analysis team interacting with a black box towards building white-box components.
26:00 - What defines synthetic data? Using API testing over unit testing. The role of data generation AI. Moving away from the buzzy ‘gold copy’ database.
28:00 - Avoiding out-of-date production data: using modelled requirements to improve coverage, deciding the business decisions (characteristics) and the data you care about. Get the TDA to keep it up to date. Model out variations and add in virtual databases, joined into the DevOps methodology.
29:15 - Upping the bar between functional and performance testing & outro: the differences and synergies between what’s being asked and how the profile of the customer is actually understood. Raising the bar on both functional testing (one-to-one testing of cases and data) and performance testing (what would production do?).
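As a thought experiment on the ‘minimal number of tests’ idea at 08:03 above, here is a minimal sketch (ours, not from the episode) of combinatorial test design in Python: enumerate the decision gates you care about, then generate the data combinations that exercise them, with no dependence on production data. All gates and values here are hypothetical.

```python
# A minimal sketch of combinatorial test design: enumerate decision
# gates, then generate the synthetic combinations that exercise them.
# The gates and values below are hypothetical, for illustration only.
from itertools import product

# Hypothetical decision gates for a customer journey.
account_types = ["personal", "business"]
balances = [0, 100]          # a boundary value and a nominal value
currencies = ["GBP", "EUR"]

# Full combinatorial coverage: every gate combination, built from
# synthetic values rather than a copy of production.
test_cases = list(product(account_types, balances, currencies))

for account_type, balance, currency in test_cases:
    # Each tuple is one synthetic test case proving one path through
    # the decision gates: the right result for the right reason.
    print(f"case: {account_type=}, {balance=}, {currency=}")
```

For larger models, pairwise techniques can cut the full product down while still covering every two-way interaction between gates.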
-
Full Episode Description
In this episode of Why Didn't You Test That?, Rich and Huw move the conversation on from GDPR compliance to bad data in software delivery. Together they explore the concept of the data gambler and the sceptic, the tension of using commercially live data in software delivery, and why, ultimately, it's all about the data: without test data, there is no testing!
Is data just a side gig for organisations? Can we change the narrative? By implementing Design through Data Security to shore up predictability and repetition, we can reduce disorder in a system and set the bedrock for dynamic automation, a modern test data management solution, and synthetic test data.
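As one illustration of that ‘predictability and repetition’ (a sketch under our own assumptions, not anything quoted from the episode): seeding a data generator makes every run produce the identical synthetic data set, so tests are repeatable and never lean on a copy of production. The fields and values below are illustrative.

```python
# A minimal sketch of predictable, repeatable synthetic data:
# a fixed seed means every run yields the same data set.
# All field names and values are illustrative assumptions.
import random

def make_customers(count: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed => repeatable generation
    return [
        {
            "id": i,
            "segment": rng.choice(["retail", "business"]),
            "balance": round(rng.uniform(0, 10_000), 2),
        }
        for i in range(count)
    ]

# The same seed always yields the same data, so a test that uses it
# is deterministic and independent of production.
assert make_customers(3) == make_customers(3)
```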
But what actually defines synthetic data, and how can organisations implement it? Responding to this, Huw and Rich touch on using API testing over unit testing and the role of data generation AI tools in moving away from the buzzy ‘gold copy’ database.
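To make the ‘API testing over unit testing’ idea concrete, here is a hedged sketch of exercising a service at its API seam with purely synthetic data. The endpoint URL, payload shape, and expected responses are assumptions for illustration, not a real service or anything described in the episode.

```python
# A hedged sketch of testing at the API seam with synthetic data,
# rather than unit-testing against a copied "gold copy" database.
# The endpoint, payload shape, and responses are all assumptions.
import requests

def test_create_customer_with_synthetic_data():
    payload = {
        "name": "Test Customer 001",     # generated, not from production
        "email": "test001@example.com",  # safe, synthetic address
        "segment": "retail",
    }
    # Hypothetical service under test, running locally.
    response = requests.post("http://localhost:8080/customers", json=payload)
    assert response.status_code == 201
    assert response.json()["segment"] == "retail"
```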
What can you do today? Design to make critical changes only once, then discard them, reducing the number of logic gates within each test and, therefore, test bloat. Spin up just the right amount of Dynamic Automation, and be critical about test cases: start with manual tests, then move towards negative testing.
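A minimal negative-testing sketch along those lines, using pytest: start from the manual happy path, then parametrise the awkward inputs, nulls included. The validate_email routine here is a hypothetical example, not anything from the episode.

```python
# A minimal negative-testing sketch: one happy path, then a
# parametrised set of awkward inputs, nulls included.
# validate_email is a hypothetical validation routine.
import pytest

def validate_email(value):
    if not value or "@" not in value:
        raise ValueError("invalid email")
    return value.strip().lower()

@pytest.mark.parametrize("bad_input", [None, "", "   ", "no-at-sign"])
def test_rejects_bad_emails(bad_input):
    # Negative cases: every bad input should be rejected explicitly.
    with pytest.raises(ValueError):
        validate_email(bad_input)

def test_accepts_valid_email():
    # The manual happy path, automated.
    assert validate_email("User@Example.com") == "user@example.com"
```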
Finally, realise the cost to the business. To what extent is the delivery team working like a feature factory, and is it aligned with or detached from the audit team? Maybe start weighing this up as a compliance versus conduct risk situation in the organisation. The need then is to align what’s being asked about the customer data with how that data is actually understood.
Ultimately, you’ll be upping the bar on both functional and performance testing. GDPR guidance is there to be interpreted by the individual organisation, so is it, at the moment, just a type of control theatre masking a real lack of understanding of the unknowns in a system under test? Real understanding of the system begins with understanding your data!
The Curiosity Software Podcast featuring Huw Price and Rich Jordan! Together, they share their insight and expertise in driving software design and development in test. Learn how you can improve your journey to quality software delivery by considering how much you really understand about your systems and, when things inevitably go wrong, asking: why didn't you test that?