
10 Features Every Codeless Test Automation Tool Should Offer


The QA community has been buzzing this past month as its members and vendors respond to Angie Jones’ insightful article, 10 features every codeless test automation tool should offer.

The article highlights several features lacking among previous generations of test automation technologies, many of which are still often found lacking today. The list is addressed to “codeless test automation tool vendors”, and numerous vendors have now responded. It ends with a second call, to testers, to suggest additional features that “should be required in the next generation of codeless test automation tools.”

Beyond Record and Playback

A response to Angie Jones’ “10 features every codeless test automation tool should offer”

This late entrant to the arena of debate attempts to respond to both calls. It aims to explain how Curiosity’s codeless automation solutions fulfil the ten features listed. In so doing, it flags up additional functionality that automation tools should offer if they are to enable QA teams to create and maintain rigorous automated tests that keep up with fast-changing applications.

Prohibitively technical, impossible to maintain: perennial barriers to test automation adoption

Two overarching drawbacks of existing technologies stand out in Jones’ article. Both in part explain why rates of automated test execution remain so low at numerous organisations.[1] Firstly, there’s the complexity and deep technical ability required by coded frameworks, at a time when automation engineers are in short supply.

Secondly, there’s arguably the greatest barrier to sustainable test automation: the maintenance of brittle automated tests. This might mean checking the validity of a growing mountain of test scripts as a system changes, before manually updating them. For record and playback tools, Jones similarly notes how “updating a test script often required re-recording the entire test.”

The result in both instances is a growing mountain of technical debt, while the time demanded by test maintenance can quickly outweigh the time saved by automating execution. Both issues can be avoided by adopting a structured, Model-Based approach.

Beyond Record and Playback: an overview of Model-Driven Development

This article will focus on The VIP Test Modeller, and how it can be used in conjunction with Curiosity’s UI Recorder and workflow engine to generate automated tests from recorded activity.

This codeless approach is not “Record and Playback” in any conventional sense, and does not produce brittle tests that can only mimic or parrot recorded activity. Instead, recorded activity is converted automatically into formal flowchart models, complete with the automation logic, data, and message activity needed for automated test execution.

Each action or set of actions is assigned to a node in the model. The actions are thereby fully re-usable and malleable, and can be re-assembled at the model level to build additional tests.

In this approach, testers can apply automated coverage algorithms to the enhanced model, generating a set of automated tests that exhaustively test the modelled logic. Alternatively, automated testing can focus on new or high-risk functionality, based on test history, rigorously testing fast-changing systems depending on time and risk. In both approaches, the optimised tests go beyond the functionality covered in recordings.

If the system changes, testers only need to update the central models, maintaining the automated tests and data. This avoids the need to re-record copious amounts of activity, or update copious numbers of tests by hand.

The approach is demonstrated in the video below, moving from a web UI to fully maintainable, rigorous tests in minutes. We now turn to consider each of Jones’ “10 features every codeless test automation tool should offer,” discussing how this structured, Model-Based method fulfils them.

[Video: Moving from a web UI to fully maintainable, rigorous automated tests in minutes]

To infinity and beyond: 10 features every automation tool should offer, and then some.

1.    Smart Element Locators

Jones is bang on the money to highlight brittle automated tests as a major challenge for both coded and codeless automated frameworks. As the latest Curiosity eBook points out, any time spent maintaining existing assets will quickly become unsustainable, as the growing complexity of systems means there are soon more assets than can be updated in-sprint.

Jones’ article suggests recording multiple element locators as a solution, whereby automated tests can fall back on alternative locators if the primary one can no longer be found in the system under test.

The article rightly highlights the flexibility that comes with having multiple locators available. Both The VIP Test Modeller’s object scanner and Recorder accordingly capture a raft of locators that can be used in automated tests.

These locators are specified at the model level, as actions assigned to specific nodes. A locator from one recorded test can be leveraged quickly to create another test that executes against the same page element, while locators can likewise be swapped or edited when needed.
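For illustration, the fallback pattern Jones describes can be sketched in plain Selenium (Java). This is not Curiosity’s implementation, and the locators shown are hypothetical examples of the kind captured for a single element:

```java
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

/**
 * Illustrative only: tries a primary locator first, then falls back to
 * alternative locators captured for the same element (id, name, CSS, XPath).
 */
public class FallbackLocator {

    public static WebElement find(WebDriver driver, List<By> locators) {
        for (By locator : locators) {
            try {
                return driver.findElement(locator);   // first locator that resolves wins
            } catch (NoSuchElementException ignored) {
                // this locator is no longer present in the page; try the next candidate
            }
        }
        throw new NoSuchElementException("No candidate locator matched: " + locators);
    }

    // Example usage with hypothetical locators for a log-in button:
    // WebElement login = FallbackLocator.find(driver, List.of(
    //         By.id("login-btn"),
    //         By.name("login"),
    //         By.cssSelector("button[type='submit']"),
    //         By.xpath("//button[contains(text(),'Log in')]")));
}
```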

The VIP Test Modeller further provides a confidence rating for each locator, so that automated testers can select the locator most likely to successfully execute automated tests. These locators are then used by the automated tests generated automatically from the model.

A range of locators captured in one click by the page Object Scanner.

This flexibility enables testers to build new tests quickly from recorded activity, maximizing testing rigour. However, setting primary and secondary locators might not be the best response to brittle tests, and the bottlenecks associated with test maintenance.

Multiple locators might create automated tests that run in spite of change, but a better aim is a test automation framework that adapts to change. Automated tests should react to system design changes, being auto-updated to reflect the very latest system logic. That way, tests not only run, but are valid and rigorously validate that the developed system reflects desired user functionality.

Automated maintenance is possible when a link is created between tests and requirements logic. When the system designs change, automated tests change with them. QA in turn becomes a largely automated comparison of how the system should work with how it has been developed in code.

The Model-Driven approach provided by The VIP Test Modeller facilitates automated test maintenance. Models are built directly from written or visual requirements, importing Gherkin specifications and BPMN diagrams automatically for instance. Recorded, re-usable automation logic, including element locators, can then be overlaid directly onto the same models:

A reactive test automation framework generates automated tests from requirements
models.

The model acts simultaneously as a requirements model and a central asset from which automated tests, data, and virtual end-points are generated. Update the model, and you update the test pack, maintaining a valid set of rigorous automated tests that are not only executable, but reflect the very latest user needs.

5.    Modification without redo

Automated test maintenance leads nicely onto the fifth feature on Jones’ list: the ability to modify automated tests without having to re-record the end-to-end journey through a system.

Mapping recorded activity to flowcharts enables quick and easy modification. Every recorded action is overlaid onto a node, complete with the test data and automation logic needed to execute the action. The action can be added, edited, removed, or duplicated to create new tests, and can further be dragged-and-dropped to new models from a central repository.

Harnessing recorded activity to edit an automation module in The VIP Test Modeller

More than this, QA Teams can share and re-use whole models from a central library, along with the automated tests associated with them. Subflows enable a “low code” approach, dragging-and-dropping functionality to form end-to-end tests.

6.     Reusable steps

Subflows and the re-usability of individual nodes in the model likewise tackles the need to re-use common steps. As Jones points out “some steps exist in multiple scenarios”, for instance “logging into an application.” This can create “maintenance nightmares” if a tester needs to update every test in which the common step occurs following a change.

Jones rightly argues therefore that “codeless tools should allow authors to record common steps that they can then insert into any test flow.” This is precisely the case with The VIP Test Modeller, where common steps are dragged-and-dropped as subflows to flowchart models from a central repository, and can likewise be maintained in one fell swoop by updating the central model.

A Model-Based approach additionally avoids wasteful over-testing that can occur when the same test steps feature in numerous scenarios. Executing every test step every time it occurs might not be possible when faced with a system of any degree of complexity, as the number of tests would be too vast, even with automated test execution.

Instead, a structured approach to test design is needed, to reduce the number of tests without compromising testing rigour. A Model-Based approach enables coverage-driven test generation, creating automated tests that execute every test step at least once, or that execute every logically distinct combination of test steps. The result is rigorous, risk-based automated testing that fully “covers” the system logic, while remaining possible within the confines of short iterations.
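As an illustration of coverage-driven generation (a sketch, not Curiosity’s algorithm), the code below treats a flowchart as a directed graph of named steps and greedily keeps the smallest set of start-to-end paths that executes every step at least once:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * A minimal sketch of coverage-driven test generation over a flowchart,
 * modelled as a directed acyclic graph of step names. It enumerates
 * start-to-end paths, then greedily keeps the fewest paths needed to
 * execute every step ("node") at least once.
 */
public class NodeCoverage {

    public static List<List<String>> coveringPaths(Map<String, List<String>> flow,
                                                   String start, String end) {
        List<List<String>> allPaths = new ArrayList<>();
        walk(flow, start, end, new ArrayDeque<>(), allPaths);

        Set<String> uncovered = new HashSet<>(flow.keySet());
        uncovered.add(end);

        List<List<String>> selected = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            List<String> best = null;
            long bestGain = 0;
            for (List<String> path : allPaths) {
                long gain = path.stream().filter(uncovered::contains).count();
                if (gain > bestGain) { bestGain = gain; best = path; }
            }
            if (best == null) break;   // remaining steps are unreachable from the start
            selected.add(best);        // keep the path covering the most new steps
            uncovered.removeAll(best);
        }
        return selected;
    }

    private static void walk(Map<String, List<String>> flow, String node, String end,
                             Deque<String> path, List<List<String>> out) {
        path.addLast(node);
        if (node.equals(end)) {
            out.add(new ArrayList<>(path));
        } else {
            for (String next : flow.getOrDefault(node, List.of())) {
                walk(flow, next, end, path, out);
            }
        }
        path.removeLast();
    }

    public static void main(String[] args) {
        // Hypothetical log-in flowchart: every step appears in at least one selected path
        Map<String, List<String>> flow = Map.of(
                "Open Login", List.of("Enter Credentials"),
                "Enter Credentials", List.of("Valid Credentials", "Invalid Credentials"),
                "Valid Credentials", List.of("Dashboard"),
                "Invalid Credentials", List.of("Error Message"),
                "Dashboard", List.of("End"),
                "Error Message", List.of("End"));
        coveringPaths(flow, "Open Login", "End").forEach(System.out::println);
    }
}
```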

3.     Control Structures

Subflows and formal modelling also remedy “the absence of control structures, such as loops and conditional clauses.” These are necessary for quickly creating tests that repeat the same action, as opposed to having “to record that action 10 times and maintain each of the actions individually.” Tests can likewise be generated when there are multiple viable alternatives for a given step, depending on conditions.

The VIP Test Modeller uses intuitive, visual modelling to achieve this logical flexibility, whereas “in coded automation frameworks, testers use loops and if-else clauses to control the flow of scripts.” Instead of requiring this level of complex coding ability, the models break a system down into its core cause-and-effect logic, with condition blocks and process blocks reflecting a series of “if this, then this” statements.

Generated tests are equivalent to paths through the model, comparable to routes through a city map produced by a GPS. A test step in turn only needs to be defined once, and will feature in every path which passes through it under specified conditions.

Subflows meanwhile handle looping in a manner that is both scalable and maintainable. An action must only be recorded and/or modelled once, and then is repeatable wherever it occurs in the model. If a test needs to loop back to a previous action under certain conditions, the subflow is simply duplicated at that point:

A loop is defined at the model level for a log-in screen. If an automated test does not
provide valid credentials, the system remains on the same log-in page, ready for the
test to enter either valid or invalid details as a user would.
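For comparison with the modelled loop above, the same log-in retry might be hand-coded using the loops and if-else clauses that Jones describes. This is a hedged sketch with hypothetical element ids, not generated code:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

/** Illustrative hand-coded equivalent of the modelled log-in loop (hypothetical element ids). */
public class LoginLoop {

    public static boolean logIn(WebDriver driver, String user, String password, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            driver.findElement(By.id("username")).clear();
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).clear();
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login")).click();

            // If the credentials were rejected, the application stays on the log-in page
            if (driver.findElements(By.cssSelector(".login-error")).isEmpty()) {
                return true;   // navigated away from the log-in page: success
            }
        }
        return false;          // still on the log-in page after every attempt
    }
}
```

In the Model-Based approach, the same looping behaviour lives in the flowchart itself, so it is defined and maintained once rather than in every script that needs it.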

2.    Conditional Waiting

Deriving tests from logical flowchart models of the system under test additionally enables “conditional waiting”, where “scripts don’t blindly wait x number of seconds before continuing to the next step”, but “wait until a condition is true and then proceed as soon as possible.”
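In plain Selenium terms (a sketch, not Curiosity’s generated code), conditional waiting replaces fixed sleeps with an explicit wait that returns as soon as the condition holds:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

/** Conditional waiting in plain Selenium: proceed as soon as the condition is true. */
public class ConditionalWait {

    public static WebElement awaitClickable(WebDriver driver, By locator) {
        // Waits *up to* 10 seconds, but returns the moment the element is clickable,
        // instead of blindly sleeping a fixed number of seconds before the next step.
        return new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(locator));
    }
}
```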

If the condition is never met within the timeout, the test fails, and the run results will reflect exactly which node in the model is associated with the problematic step. As Jones then argues, this “drastically cuts down on the execution time of the automation suite while also preventing flaky tests.”

4.    Easy Assertions

Jones argues that a codeless automation tool should give “great consideration” to “adding assertions”, “since it’s the most important part of the test script.” Defining assertions should be “as simple as adding the navigation steps”, meaning that the validation should be intuitive and “easily expressed”.

The challenge of assertions, Jones notes, is that such validation is not usually performed by an action that can be readily recorded. It might instead be something “you do with your eyes”.

Using The VIP Test Modeller, a range of assertions are easily defined in exactly the same way as Actions, using the same Module Editor shown above. This adopts a “low code” approach, where drop-down menus define the type of capture, with a small set of standard expressions that are easy to edit and re-use:

Defining a mid-test assertion in The VIP Test Modeller.

A range of “Captures” is provided, including the checks that “you do with your eyes”. Automated tests can validate that a Module has produced the correct result based on the resultant URL, the content of the resultant screen, and more.
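A hedged sketch of what such captures amount to in code, checking the resultant URL and the content of the resultant screen (the URL fragment and element id are hypothetical):

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

/** Illustrative assertions on the resultant URL and screen content after a log-in step. */
public class ScreenAssertions {

    public static void assertLoggedIn(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));

        // Check 1: the resultant URL is the dashboard page (hypothetical path)
        wait.until(ExpectedConditions.urlContains("/dashboard"));

        // Check 2: the resultant screen shows the expected content
        String banner = driver.findElement(By.id("welcome-banner")).getText();
        if (!banner.contains("Welcome")) {
            throw new AssertionError("Expected welcome banner after log-in, but saw: " + banner);
        }
    }
}
```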

Assertions can furthermore be defined mid-test, among automated test Actions. This maximises observability and test confidence. It is valuable for making sure that a test has not only passed, but has passed for the right reason.

Mid-test assertions capture information at stages throughout a test, creating a trail of breadcrumbs through the system. Validation based on these observation points informs a tester that the system is functioning as expected at each point, rather than simply at the end of a test.

This works to avoid invalid test results created by false positives, and maximises the amount of information gathered during testing. It also enables root cause analysis when tests fail, pinpointing the logical step in the system at which several tests are failing.

The test failure is shown at the model level, highlighting exactly which paths have failed. Development can then identify the exact point in the system that is creating the bug from the logically precise models, working quickly to remediate the error.

7.   Cross-browser support

The seventh feature in the article highlights the frustration of browser-specific extensions, where tests recorded in one browser can only be executed against that browser. This contrasts with the numerous browsers which users are likely to use, all of which should be tested against pre-release. Instead, “test authors should be able to record a scenario once on a given browser, and be able to play that recording on any other major browser.”

With The VIP Test Modeller, re-usable actions are recorded using a Chrome browser extension, but are then browser-generic. Automated tests defined once can be executed rapidly across multiple browsers, configuring cross-browser testing in minutes.

A single automation action opens a URL in a browser selected from a drop-down menu.

“Open Browser” might typically be the first step in a test, equivalent to one Start Point block in a model. The automated test then executes all subsequent logic contained in that model against the selected browser.

Instead of re-recording tests against different browsers, QA teams only have to create different Start Points that open different browsers. This is as quick and easy as copying and pasting a block and selecting a different browser from the drop-down menu, re-using the automation logic defined throughout the model:

Cross-browser testing is set up in minutes using multiple start points to re-use the
flowchart’s automation logic and data.
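A minimal sketch of the idea, assuming standard Selenium driver classes: only the “Open Browser” start point changes, while every subsequent step stays browser-generic:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.edge.EdgeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

/**
 * Illustrative "Open Browser" start point: the browser name is the only thing
 * that changes, while all subsequent test logic stays browser-generic.
 */
public class BrowserStartPoint {

    public static WebDriver open(String browser, String url) {
        WebDriver driver = switch (browser.toLowerCase()) {
            case "firefox" -> new FirefoxDriver();
            case "edge"    -> new EdgeDriver();
            default        -> new ChromeDriver();
        };
        driver.get(url);
        return driver;
    }

    // The same recorded steps can then run against each browser in turn:
    // for (String b : List.of("chrome", "firefox", "edge")) {
    //     WebDriver driver = BrowserStartPoint.open(b, "https://example.com/login");
    //     ...run the generated test logic...
    //     driver.quit();
    // }
}
```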

Rigorous testing in the digital age must furthermore extend beyond the browser, executing tests across multiple devices, including mobile. It must also go beyond the UI, down into the database layer and up into the API or Service Layer.

The VIP Test Modeller defines tests in Selenium and Appium, the latter enabling mobile testing. The tests are executed using VIP, Curiosity’s fully connected, high-speed workflow engine. Tests generated in The VIP Test Modeller can therefore be executed using a combination of new or existing test automation frameworks, and in a broad range of environments. This rapidly re-uses automation logic defined once, testing against the full range of environments used in production.

Tests generated from The VIP Test Modeller additionally validate the database, service, and message layer, in spite of the simplicity of the model-based test generation. Recorded activity includes any API interactions and database activities, while database values and Request-Response capture can be used to build a fuller picture of multi-tiered architecture.

This works to create complete models of a system’s architecture, and the myriad of dependencies that exist in it. Paths through these detailed but easy-to-use schematics are tests, complete with the mid-test Assertions needed to observe actual results at the message and database layer.

This maximises observability, enabling QA to truly “assure” that they got the right result, for exactly the right reason. Testing can validate that the UI, low level plumbing, and system-wide integrations are all functioning as intended.
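For illustration, a below-the-UI check of the kind described above might look like the following sketch, using Java’s built-in HTTP client against a hypothetical service endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Illustrative request-response check below the UI (hypothetical endpoint and payload). */
public class ApiLayerCheck {

    public static void assertOrderCreated(String orderId) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/" + orderId))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Validate the service layer as well as the UI: status code and payload content
        if (response.statusCode() != 200 || !response.body().contains("\"status\":\"CREATED\"")) {
            throw new AssertionError("Order " + orderId + " not created correctly: " + response.body());
        }
    }
}
```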

8.   Reporting

This observability is only valuable with sufficient reporting of test results, the eighth feature on Jones’ list. When “executing hundreds – or thousands – of tests”, testers must be able to quickly identify whether failures are attributable to a genuine bug in the code, or to invalid tests. This should not require “reruns or extensive debugging”, as Jones notes, and automated testing must additionally be able to identify the exact point of failure in a system’s logic.

Run results must not only be accurate, but also sufficiently granular. This first requires comprehensively and accurately defined expected results. Without these, test teams cannot determine with any confidence if the system is behaving as it should, as there is no proper definition of how it should behave.

Secondly, run results must be formulated quickly. That means automatically when faced with “hundreds – or thousands – of tests”. Test teams cannot laboriously compare copious actual results to the expected results manually, for instance scanning through vast spreadsheets of results.
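As a simple illustration of what automated comparison means in practice (not VIP’s implementation), expected values per test step can be checked against captured actuals, producing one granular mismatch per failing field:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Illustrative automated comparison of expected vs. actual results, step by step. */
public class ResultComparison {

    /** Returns one granular mismatch message per failing field, or an empty list if the step passed. */
    public static List<String> compare(String step, Map<String, String> expected, Map<String, String> actual) {
        List<String> mismatches = new ArrayList<>();
        expected.forEach((field, want) -> {
            String got = actual.get(field);
            if (!want.equals(got)) {
                mismatches.add(step + ": expected " + field + "='" + want + "' but was '" + got + "'");
            }
        });
        return mismatches;
    }

    public static void main(String[] args) {
        // Hypothetical expected results derived from the model, vs. captured actuals
        System.out.println(compare("Submit Order",
                Map.of("status", "CREATED", "total", "42.00"),
                Map.of("status", "CREATED", "total", "0.00")));
    }
}
```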

VIP automatically compares expected and actual results when executing tests generated from The VIP Test Modeller. The run results are then fed back into The VIP Test Modeller, and are updated across Application Lifecycle Management tools:

Complete run results are generated automatically during test execution and can be
inspected at the model or test level in The VIP Test Modeller. A new set of granular tests
are then generated to pinpoint the point of failure in tests, performing root cause analysis
to identify any failure in the system’s code, or the tests themselves.

In The VIP Test Modeller, the complete run results can be inspected at the model or test level, browsing through paths that correspond to failed tests. Graphical overviews further provide insights for test managers, while tabular results are provided for individual test steps, linked to their corresponding models.

This granular reporting enables root cause analysis to pinpoint the exact point of failure. Test teams can identify overlapping test steps in failing tests, which in turn correspond to a logical step in the system model. If needed, they can generate a new set of tests that focus on the logic surrounding the point of failure, working to identify its cause.

QA teams can in turn provision a logically precise bug report to development, providing a visual model that highlights the point of failure in the system. Developers can work to efficiently remedy the bug, without having to analyse prohibitively complex system logic to locate it, and are not sent on wild goose chases when test failures are the result of invalid tests.

10.   Continuous Integration

The ability to execute automated tests using existing frameworks and update run results across ALM tooling brings us to the last feature in Jones’ list: Continuous Integration. Jones observes how “in the era of DevOps … Tests should integrate with [DevOps] pipelines and automatically execute when triggered”. They “should be capable of running in parallel as well.”

Using the VIP workflow engine to execute tests facilitates a “single pane of glass” approach, enabling a common technology stack to spark automated processes. VIP comes equipped with a broad range of out-of-the-box and custom connectors that can be used to integrate existing automation frameworks into DevOps pipelines.

An automation framework can thereby be easily connected and triggered from any existing technologies in use at an organization. For instance, a Slack bot might be created to not only execute a re-usable set of tests, but to spin up test environments and provision test data too. Run results might then additionally be reported in Slack, and updated in ALM tools like JIRA. This works to keep information automatically aligned across DevOps pipelines.

A sample range of connectors available using the VIP workflow engine.
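As a hedged example of the kind of chat update described above, run results could be pushed to a Slack incoming webhook; the webhook URL and result counts below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Illustrative run-result notification posted to a Slack incoming webhook (hypothetical URL). */
public class SlackRunReport {

    public static void post(String webhookUrl, int passed, int failed) throws Exception {
        String payload = String.format(
                "{\"text\":\"Automated run finished: %d passed, %d failed\"}", passed, failed);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            throw new IllegalStateException("Slack webhook rejected the update: " + response.body());
        }
    }
}
```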

VIP further offers batch processing to execute automated tests and the surrounding processes in parallel. This provides the high-speed test execution needed to complete the number of tests required for complex systems, even within short iterations.

However, truly “automated” testing within a DevOps framework must extend beyond just automated execution. It must extend to other “TestDev” tasks that can slow testing down, many of which have already been discussed: test creation and maintenance, test data provisioning, and spinning up environments.

Automation must additionally extend to the “TestOps” tasks that slow QA teams down. These are the repetitious, rule-based processes that surround test asset creation and execution. They are standardised tasks that focus on organisational practices and internal communication, from inputting testing metadata into ALM tooling, to providing email and chat updates to managers and teams.

These processes are invaluable to cross-team collaboration and facilitate project management. However, they are time consuming and detract from developing new tests to execute against the latest system. Fortunately, such process-driven, standardized tasks are ripe for automation.

Robotic Process Automation, or RPA, arose in this world of operations, and uses similar technologies to test automation. RPA introduces non-invasive bots to mimic rule-based tasks otherwise performed by testers, and thrives when executing repeatable tasks.

RPA is a growing trend within testing, as organizations seek to free test teams’ time to focus on testing new functionality within short sprints. Using VIP with The VIP Test Modeller combines automated, optimized TestDev automation with high-speed RPA, reliably executing both TestOps and TestDev tasks.

High-speed Robotic Process Automation automates repetitious, rule-based tasks as tests
are created and executed from The VIP Test Modeller.

As tests are maintained in The VIP Test Modeller, VIP will keep test metadata up-to-date across technologies, for instance inputting test cases, data, and virtual services into Application Lifecycle Management tools and QA environments. VIP further executes automated tests generated in The VIP Test Modeller across distributed environments, updating run results in requisite fields across tools and providing email and chat updates.

 

This replaces the need for testers to repetitiously copy and edit data across numerous tools, allowing them to focus on developing test assets to test new and evolving logic.

9.   Ability to insert code

Jones’ article rightly highlights the need for flexibility, as “no codeless tool can incorporate everything that’s possible”. Numerous tools by contrast trade robustness for simplicity, and organisations can become locked into out-of-the-box functionality. This might not be able to test the complex logic of their system under test, particularly if limited by a vendor-specific set of keywords.

“Codeless” automation tools should instead be fully customizable, enabling QA teams to create tests tailored for their environment, rather than the constraints of a test automation framework. There should still be a broad range of out-of-the-box functions to cover as many scenarios as possible; however, for those inevitable, organization specific “edge cases”, the ability to add custom actions is a must.

Executing tests from The VIP Test Modeller using the VIP workflow engine not only facilitates Continuous Integration and Robotic Process Automation, it further enables full customization. Automated tests can be executed as part of a larger, custom VIP workflow that defines additional custom actions. These range from resolving test data, to updating fields in databases, and interacting with systems or applications.

Testers can create these workflows using a “low code” approach, dragging and dropping common processes from a comprehensive range of functions. They can additionally build custom actions rapidly using a standardized process, whereupon the custom actions become fully re-usable.
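A sketch of what a custom action might look like once wrapped for re-use, here resetting a test account directly in a database (the JDBC URL and schema are hypothetical):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/** Illustrative custom action: reset a test account's state directly in the database (hypothetical schema). */
public class ResetAccountAction {

    public static void run(String jdbcUrl, String user, String password, String accountId) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement stmt = conn.prepareStatement(
                     "UPDATE accounts SET status = 'ACTIVE', failed_logins = 0 WHERE account_id = ?")) {
            stmt.setString(1, accountId);
            stmt.executeUpdate();   // once wrapped, the custom step is reusable wherever the workflow needs it
        }
    }
}
```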

The complete range of standard and custom integrations provided by VIP means that any automation action can in principle be defined, with the ability to interact with systems at the UI, database, and API layers.

Beyond Record and Playback: 10 Additional features a codeless test automation tool should offer

A Model-Based approach enables the creation and maintenance of rigorous automated tests within short iterations. With The VIP Test Modeller, this goes beyond traditional record and playback, using recorded activity as an accelerator to auto-build tests and data that are optimized to cover the system’s logic.

This article has sought to demonstrate how The VIP Test Modeller fulfills Angie Jones’ “10 features every codeless test automation tool should offer”. In so doing, it has highlighted additional features that Curiosity believe enable rigorous automated testing:

    1. Automated test maintenance: Existing automated tests must react to changes made to the system. Any time spent checking and updating tests manually will quickly become unfeasible, as the number of tests that have to be maintained will demand more time than is available in short iterations. Creating traceability between automated tests and the system requirements can automate test maintenance. This link between requirements logic and test logic is made possible with The VIP Test Modeller, generating automated tests directly from requirements created by Business Analysts.

    2. Test coverage and measurability: Recorded tests are only as good as the recordings, and rarely cover a system’s logic sufficiently. QA teams must be able to identify what functionality automated tests are testing, and how much of a system’s logic this covers. They must be able to enhance the amount of a system covered by automated tests, applying a structured approach to ensure automated testing covers a system fully based on time and risk. This is possible with a systematic, automated approach to test generation, as is made possible by the mathematical precision of flowchart modelling in The VIP Test Modeller. These flowcharts can be generated directly from recorded activity, moving directly from recordings to optimised automated tests.

    3. De-duplication: A structured, coverage-driven approach further allows test execution time to be reduced, without compromising testing rigour. De-duplication of existing tests is possible in a Model-Based approach, identifying overlapping test steps and consolidating test cases to ensure that the same logic is covered in a fraction of the time.

    4. Mid-test assertions: The ability to validate individual test steps drives up observability. This increases the amount of learning made possible during automated test execution, and also enables testers to make sure that they got the right result, for the right reason.

    5. Multi-layer recording and testing: Jones’ article mentions the test automation pyramid, but the focus remains on UI Testing. Automated testing should extend beyond the UI, validating that a system is functioning as intended across its multi-tiered architecture. Logic designed for computer consumption like APIs is well-suited to automated test execution, while automated testing must additionally check that the back-end is being updated as expected. Automated tests generated from activity executed against the UI must accordingly capture the information needed to test multi-tier, and this should also be codeless.

    6. Complete cross-platform testing: Automated tests should mimic the full range of environments found in production. In addition to cross-browser testing, rigorous automated testing must be performed across devices too.

    7. Expected results: Expected results must be accurately and comprehensively defined, as otherwise QA Teams cannot say with confidence that their tests have passed. Expected results should reflect the user’s desired functionality, and this is best achieved by deriving them directly from requirements that capture user needs. This is made possible by The VIP Test Modeller, in which both automated tests and expected results are derivable from system requirements. Meanwhile, expected results are automatically compared to actual results during test execution, inputting complete run results across ALM tools.

    8. Test data: Like expected results, the data needed to execute automated tests should also be found or generated at the same time as the tests themselves. Data should be created for every test, and should be maintained as the tests change, avoiding the bottlenecks associated with data constraints and maintenance. The data might initially be captured during recording, and should then be linked to the automated tests. However, it must further be defined dynamically, updating as the tests do. The VIP Test Modeller therefore adopts a “Just in Time” approach to data provisioning, resolving test data as the latest automated tests are executed.

    9. Robotic Process Automation: Automating test execution alone leaves numerous testing processes untouched. These are often repetitious and time-consuming, detracting from time spent testing new or critical functionality. Fortunately, these standardised “TestOps” tasks are ripe for automation, and combining test automation with RPA achieves this.

    10. Root cause analysis: Developers and system architects don’t just need to know that a test has failed – they need to know where, and why. Root cause analysis pinpoints failure points within a system, and is possible if automated tests are linked to logical models of the system. It can further be automated, generating automated tests that focus on logic surrounding test failures, to pinpoint exactly which nodes in a system model are shared by failing tests.
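As a closing illustration of the root cause analysis in point 10, a minimal sketch (not Curiosity’s implementation) can intersect the steps of failing test paths to surface the shared nodes worth investigating:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Illustrative root cause hint: find the model steps shared by every failing test. */
public class FailureOverlap {

    public static Set<String> sharedSteps(List<List<String>> failingTests) {
        if (failingTests.isEmpty()) return Set.of();
        Set<String> shared = new LinkedHashSet<>(failingTests.get(0));
        for (List<String> test : failingTests.subList(1, failingTests.size())) {
            shared.retainAll(test);   // keep only steps that appear in every failing path
        }
        return shared;
    }

    public static void main(String[] args) {
        // Hypothetical failing paths through the model; the overlap points to candidate failure nodes
        System.out.println(sharedSteps(List.of(
                List.of("Open Login", "Enter Credentials", "Submit Order", "Confirm"),
                List.of("Open Login", "Enter Credentials", "Submit Order", "Cancel"))));
    }
}
```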

Let’s carry on the conversation.

If you think there are more must-have features, please Tweet us on @CuriositySoft, or get in touch on Info@Curiosity.Software. Likewise, please do not hesitate to get in touch if you have any questions regarding the technologies discussed in this article, or if you would like to arrange a demo.

Book a Demo

[1] http://www.devopsonline.co.uk/14159-2-ai-and-automation-vs-human-testers/

[Image: Pixabay]
