Introducing “Functional Performance Testing” Part 1
Organisations today have long understood the need to automate test execution, and 90% believe that automated testing allows testers to run their tests more quickly.[1] Yet, QA teams are struggling to achieve sufficiently high rates of automated test execution. Slow and overly manual testing still abounds.
In 2018, 61% of organisations had automation rates lower than 50%.[2] This article considers five reasons for these low rates of functional test automation, setting out some of the most common pitfalls to watch out for when adopting a test automation strategy.
Watch our on-demand webinar to discover how Model-Based Testing enables enterprise-wide automation adoption!
Slow and repetitious test creation is the primary source of test automation bottlenecks. This is because tests must exercise every single combination of user activity and data, and writing test scripts is therefore repetitious, slow, and labour-intensive. There are many overlapping test steps that need to be written to provide full coverage, and numerous new tests are required at the start of each new iteration. Automation engineers are therefore always trying to play ‘catch up’ if they write scripts by hand, and the time spent scripting tests often outweighs the time saved during execution.
Figure 1: A scripted page object in a test automation framework. How many of these can an engineer write manually per day?
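To make the bottleneck concrete, the sketch below shows the kind of hand-written page object Figure 1 depicts. The page, locators, and driver interface are all hypothetical (a real framework would wrap Selenium WebDriver or similar); the point is that even one small interaction demands several lines of scripted boilerplate, multiplied across every page and every test.

```python
# A minimal sketch of a hand-scripted page object. The login page,
# locators, and driver API are invented for illustration.

class StubDriver:
    """Stands in for a browser driver so the sketch is self-contained."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))

class LoginPage:
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        # Three scripted steps for one small interaction; a realistic
        # system has hundreds of such interactions to script and re-script.
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = StubDriver()
LoginPage(driver).log_in("alice", "secret")
print(len(driver.actions))  # 3
```

Every new screen, field, or flow added in an iteration demands more of this boilerplate, which is why hand-scripting engineers are perpetually playing 'catch up'.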
Secondly, automated testing often has unacceptably low test coverage, leaving systems vulnerable to damaging bugs.
If automated tests are derived manually from imprecise and incomplete system requirements, testers cannot definitively say how ‘good’ their tests are, nor when they have tested ‘enough’. The tests are not measurable against requirements, making it impossible to know how much of a system is being covered by automated tests.
Manual test creation therefore leads to test coverage as low as 10-20%. Dorothy Graham, a pioneer of software test automation, questioned automated tests’ ability to improve test coverage, saying: ‘It is the quality of the tests that determines whether or not bugs are found, and this has very little, if anything, to do with automation’.[3]
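The scale of the gap is easy to demonstrate. The snippet below, using invented field names and value counts, shows how quickly input combinations multiply for even a small form, and why a hand-scripted test pack covers only a fraction of the space:

```python
# A rough illustration of combinatorial growth in test inputs.
# The fields and their value counts are invented for illustration.
from itertools import product

fields = {
    "payment_method": ["card", "paypal", "invoice"],
    "currency": ["EUR", "GBP", "USD", "JPY"],
    "customer_type": ["new", "returning"],
    "discount_code": ["none", "valid", "expired"],
}

# Every combination of the four fields: 3 * 4 * 2 * 3 = 72 paths.
combinations = list(product(*fields.values()))
print(len(combinations))  # 72

# If a team hand-scripts 10 tests against this one form, coverage of
# the input space is roughly 14% -- in line with the 10-20% cited above.
print(round(10 / len(combinations) * 100))  # 14
```

Real systems have far more than four variables per screen, so the untested space grows far faster than manual scripting can close it.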
Thirdly, bad test data can undermine the speed and rigour of test automation frameworks.
For rigorous testing, test data must be available, comprehensive, and accurate. This is rarely the case for organisations today, and 53% of the respondents to the World Quality Report in 2018 said they had a lack of appropriate test data.[4]
Test data for most companies still means large, masked copies of production data drawn from past user activity. These copies cover only a small range of possible tests, while testers have to then search through these large ‘dumps’ of production data to find suitable data sets. This is highly time-consuming and potentially pointless, as suitable data combinations do not always exist.
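The 'data hunt' described above can be sketched in a few lines. The records and fields below are invented, but they show the core problem: a tester filters a production-style dump for the combination a test needs, and that combination may simply never have occurred in past user activity.

```python
# A sketch of searching a production-style dump for test data.
# Records, fields, and values are invented for illustration.

production_dump = [
    {"account": "current", "balance": 120.0, "overdraft": False},
    {"account": "current", "balance": 950.5, "overdraft": True},
    {"account": "savings", "balance": 30.0,  "overdraft": False},
]

def find_data(dump, **criteria):
    """Linear scan of the dump for records matching every criterion."""
    return [r for r in dump if all(r.get(k) == v for k, v in criteria.items())]

# A common scenario is easy to find in production data...
print(len(find_data(production_dump, account="current", overdraft=True)))  # 1

# ...but the edge case a test needs may not exist in the dump at all,
# so the time spent searching is wasted.
print(len(find_data(production_dump, account="savings", overdraft=True)))  # 0
```

Synthetic data generation sidesteps this entirely: instead of hoping a combination occurred in production, the required data is created on demand.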
Figure 2: Inaccurate and low variety test data undermines the stability and rigour of test automation frameworks.
The manual data hunt is also error-prone, leading to bad data that destabilises automation. Test failures stemming from invalid data meanwhile flag defects that do not exist in the code, leading developers on a wild goose chase.
Fourthly, test maintenance is arguably the greatest barrier to successful test automation adoption.
Manually created tests are extremely brittle to system changes. Every time the system is updated, thousands of regression tests might be rendered invalid. Test engineers must then identify the impact of changes made to components across complex systems, checking which tests each change has affected. This is a vastly complex and error-prone process.
Figure 3: Impossible test maintenance – arguably the greatest barrier to successful test automation.
Automation of test maintenance is therefore a must for rigorous testing in-sprint. However, automating maintenance is not possible when tests have been manually derived from the system requirements, as there is no formal link between the latest system designs and the tests derived from them.
In the image above, a change request is submitted and added to a bag of unconnected requirements that are not formally mapped. Engineers must then identify which tests have been impacted by each change request and update them accordingly. However, the system has more moving parts than any one person can hold in their head. Test maintenance is therefore both time-consuming and error-prone.
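The value of a formal mapping can be sketched in a few lines. The requirement IDs and test names below are invented, but the principle is general: when each test records which requirement it covers, impact analysis becomes a lookup rather than guesswork.

```python
# A minimal sketch of requirement-to-test traceability for impact
# analysis. Requirement IDs and test names are invented for illustration.

requirement_to_tests = {
    "REQ-LOGIN":   ["test_valid_login", "test_locked_account"],
    "REQ-PAYMENT": ["test_card_payment", "test_refund"],
    "REQ-REPORTS": ["test_monthly_report"],
}

def impacted_tests(changed_requirements):
    """Return every test traced to a changed requirement."""
    impacted = set()
    for req in changed_requirements:
        impacted.update(requirement_to_tests.get(req, []))
    return sorted(impacted)

# A change to the payment requirement flags exactly two tests for
# maintenance; the rest of the suite is known to be unaffected.
print(impacted_tests(["REQ-PAYMENT"]))  # ['test_card_payment', 'test_refund']
```

Without such a mapping, every change forces engineers to inspect the whole suite by hand; with it, the impacted subset is computed automatically, which is the basis of automated test maintenance.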
Finally, testers without deep coding skills tend to outnumber engineers with automation experience. This results in a core group of engineers being tasked with the whole organisation’s automation, leaving a small team severely overworked.
These five challenges present a significant barrier to achieving sufficient levels of test automation. Unless they are overcome, the potential ROI of test automation will not be realised by many companies.
Here at Curiosity Software Ireland, we have developed Test Modeller, a model-based test automation tool that overcomes the challenges introduced in this blog. To find out how, watch our on-demand webinar. Join Curiosity directors Huw Price and James Walker as they demonstrate how Model-Based techniques can overcome these barriers to successful test automation.
[1] Panaya (2018), The State of Functional Testing Today. Retrieved from https://www.devopsonline.co.uk/14159-2-ai-and-automation-vs-human-testers/ on 1 July 2019.
[2] Panaya (2018), The State of Functional Testing Today. Retrieved from https://www.devopsonline.co.uk/14159-2-ai-and-automation-vs-human-testers/ on 1 July 2019.
[3] Dorothy Graham and Mark Fewster (2009), That’s No Reason to Automate. Retrieved from http://www.dorothygraham.co.uk/downloads/generalPdfs/NoReasonAut.pdf on 1 July 2019.
[4] Capgemini, Micro Focus, Sogeti (2019), World Quality Report, 11. Retrieved from https://www.sogeti.com/globalassets/global/wqr-201819/wqr-2018-19_secured.pdf on 19 June 2019.
[Image: Pixabay]