5 Reasons to Model During QA, Part 4/5: Faster QA Reaction Times
Welcome to part 4/5 of 5 Reasons to Model During QA! If you have missed any previous instalments, use the following links to see how modelling can:
Design Complex Systems, Create Visual Models, Collaborate on Requirements, Eradicate Bugs and Deliver Quality!
4 min read
Thomas Pryce 24 July 2019 10:58:21 BST
Welcome back to 5 Reasons to Model During QA!
Part one of this series discussed how modelling enables “shift left” QA, eradicating potentially costly defects as they arise during the design phase.
Part two then shifted focus “right”, to testing code built from the requirements. It considered the significant time gains achieved by generating test cases, test scripts and test data automatically.
Model-Based Testing thereby makes it possible to test complex applications sufficiently, even within short iterations. Today’s article continues this theme, focusing in particular on the test coverage gains that accompany the increased testing efficiency.
Manual test creation is not only slow and repetitive; it also leads to an undesirable combination of over-testing and under-testing. Overall test coverage remains low, while certain logic is wastefully tested again and again. QA therefore fails to sufficiently mitigate the risk of damaging defects, leaving a system exposed to costly bugs.
The sheer complexity of modern applications means that creating test cases manually and unsystematically cannot reach the coverage required for true quality assurance. Multi-tiered systems have a multitude of interrelated components, as demonstrated in the following dependency map:
A dependency map created from around 100,000 lines of C# code. This map only shows the relationships between the components in the system. The picture becomes vastly more complex once the intertwined logic contained in each component is factored in.
The above dependency map reflects an application with around 100,000 lines of code, and modern applications will typically contain millions of possible paths through their logic. This is more than any one person could comprehend in their head, and 2018 research suggests that 66% of organisations struggle “merely deciding what to test”.[1]
Manual test creation therefore tends to under-test complex systems severely. Testers pick off the most obvious, "happy path" scenarios first, testing these expected behaviours repeatedly. Negative scenarios and unexpected results go untested, even though it is these outliers that can cause the most severe defects in production.
The result is resource-intensive, wasteful over-testing that nonetheless leaves systems exposed to bugs. Low test coverage persists even with test execution automation, as executing tests automatically does nothing to improve the quality of the test suite itself. Instead, a measurable and systematic approach to identifying what to test is needed, along with an efficient and systematic method for creating those tests.
Model-Based Testing enables such a systematic and measurable approach to test case design. It harnesses the power of computer processing and the reliability of mathematics to identify all the tests contained in massively complex systems. This is possible even when the logic is greater than any human mind could comprehend.
Part two of this series set out how mathematically precise flowcharts enable the automated identification of every path through the models. Each logical journey is equivalent to a test case, and automated algorithms can therefore identify every test in the flowchart. Using Test Modeller, subflows can additionally be used to embed lower-level components within master models, rapidly creating comprehensive test cases for complex systems:
Subflows integrate lower level functionality into master flowcharts, enabling rapid and
reliable test case generation for complex systems.
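Test Modeller's own generation algorithms are not shown here, but the underlying idea can be sketched in a few lines: model the flowchart as a directed graph and enumerate every path from start to end, each path being one test case. The graph, node names and login scenario below are purely illustrative, not taken from any real model.

```python
def all_paths(graph, start, end, path=None):
    """Enumerate every path from start to end in an acyclic flowchart.

    Each complete path through the model corresponds to one test case.
    """
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        paths.extend(all_paths(graph, nxt, end, path))
    return paths

# A toy login flowchart: credentials are either valid or invalid.
flow = {
    "Start": ["Enter Credentials"],
    "Enter Credentials": ["Valid", "Invalid"],
    "Valid": ["End"],
    "Invalid": ["End"],
}
tests = all_paths(flow, "Start", "End")
print(len(tests))  # 2 paths, i.e. 2 test cases
```

Even this toy model yields both the positive and the negative journey automatically; a real model with subflows simply produces a much larger path set by the same principle.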
Generating tests from models introduces measurability to test design. Test coverage is proportional to the logic contained in the model, and tests can be generated to touch all the logic contained in the model at least once.
Multiple algorithms might be used, for instance testing every logical step (node) in the model, or covering every connecting “edge” (arrow) between the blocks at least once.
These techniques generate the smallest number of tests needed to cover the model, reducing testing time while still covering every positive and negative scenario. This avoids wasteful over-testing, while still exercising every distinct combination of logic and data once.
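The node and edge criteria mentioned above are standard graph-coverage measures, and they can be computed directly. As a rough sketch (the graph format and names are illustrative, not Test Modeller's own), the function below reports what fraction of a model's blocks and arrows a set of test paths actually touches:

```python
def coverage(paths, graph):
    """Return (node_coverage, edge_coverage) achieved by a set of test paths."""
    nodes = set(graph) | {n for succs in graph.values() for n in succs}
    edges = {(a, b) for a, succs in graph.items() for b in succs}
    hit_nodes = {n for p in paths for n in p}
    hit_edges = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
    return len(hit_nodes) / len(nodes), len(hit_edges) / len(edges)

# The same toy login flowchart, tested with only its happy path.
flow = {
    "Start": ["Enter Credentials"],
    "Enter Credentials": ["Valid", "Invalid"],
    "Valid": ["End"],
    "Invalid": ["End"],
}
happy_only = [["Start", "Enter Credentials", "Valid", "End"]]
print(coverage(happy_only, flow))  # (0.8, 0.6): the negative path goes untested
```

The numbers make the earlier point concrete: a suite that repeats only happy-path scenarios can look busy while measurably missing the model's negative logic.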
Generating tests from logical models further maximises observability, reducing the likelihood of false positives and of bugs masking bugs. Testers can instead know that their tests got the right result, for the right reason, providing true assurance of the quality of a system.
It is rarely feasible to execute every test case associated with a complex system in a single iteration, and exhaustive testing should instead be reserved for the most high-risk, high-visibility functionality. Fortunately, Model-Based Testing also enables reliable risk-based testing, focusing test creation on critical functionality.
Test Modeller makes reliable, risk-based test design possible. Several coverage profiles can be created for a given model, setting requisite coverage levels for tagged features. Automated test generation will then create the smallest set of tests needed to satisfy the coverage levels by feature, while testing the untagged logic in a model to a specified coverage level:
A coverage profile created for a login screen focuses on testing negative scenarios.
The generated test cases will target scenarios where invalid data is entered into the screen.
“Happy path” scenarios will be ignored, while logic contained in the surrounding model will
be tested to a medium level of coverage.
This granular approach to test coverage enables QA teams to focus testing on high-risk functionality. Testing might for instance focus on the negative paths that can cause the most severe defects in production. Coverage profiles might also be created for targeted regression, focusing on features that failed in the last test run, or on features that have been recently updated.
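Selecting the smallest set of tests that satisfies per-feature coverage targets is, at heart, a set-cover problem, for which a greedy heuristic is a common approach. The sketch below is an illustration of that idea only, not Test Modeller's actual algorithm; the test names and feature tags are hypothetical:

```python
def select_tests(tests, required):
    """Greedily pick the fewest tests that together cover every required tag.

    'tests' maps a test name to the set of feature tags it exercises;
    'required' is the set of tags a coverage profile marks as must-cover.
    """
    chosen, uncovered = [], set(required)
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        if not tests[best] & uncovered:
            raise ValueError(f"No test covers: {uncovered}")
        chosen.append(best)
        uncovered -= tests[best]
    return chosen

# A hypothetical suite tagged by the features each test exercises.
suite = {
    "T1": {"login", "happy-path"},
    "T2": {"login", "invalid-password"},
    "T3": {"locked-account"},
    "T4": {"invalid-password", "locked-account"},
}
# A profile targeting only the negative login scenarios:
print(select_tests(suite, {"invalid-password", "locked-account"}))  # ['T4']
```

A single well-chosen test satisfies both negative-scenario targets here, mirroring how a coverage profile trims a suite down to the tests that matter for the risk at hand.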
Model-Based Testing therefore provides the flexibility to dynamically expand test coverage, focusing in detail on given parts of the system. Combined with the improved efficiency of automated test creation, QA can test more functionality in short iterations while mitigating the risk of defects as far as possible.
This is particularly true after a change has been made to a complex and vast system, the subject of the next article in this series.
[1] Vanson Bourne and Panaya (2018), survey of over 300 IT decision makers in the UK and US. Cited from Islam Soliman (2018), “AI & automation vs humans: the future of software testing?”, DevOpsOnline (16-11-18). Retrieved from http://www.devopsonline.co.uk/14159-2-ai-and-automation-vs-human-testers/ on 05-12-18.