This is Part 2/3 of “Introducing Functional Performance Testing”, a series of articles considering how to test automatically across multi-tier architecture, and across the testing pyramid. Read Part One here, or download the whole series as an eBook.

Complete test case design: generate functional tests and data that fully “cover” multi-tier systems

Part One of this series set out the reasons why effective Load testing must choose from a vast number of possible tests. Since every test cannot be executed in short iterations, it concluded that rigorous Load testing must instead aim to test every logically distinct scenario that might be exercised in production.

Fortunately, the need to identify an executable number of tests that cover a large number of complex scenarios is not new. Model-Based testing is all about mapping the routes through a system’s logic, generating tests to “cover” every logically distinct positive and negative path.

These principles can be applied to performance testing, overcoming the complexity described in Part One, along with many of the challenges associated with Load testing across multi-tier architecture. This in turn opens up a new realm of testing, with tests that account for both functional and performance factors. This is “Functional Performance Testing”.

Modelling complex systems rapidly

“Functional Performance Testing” begins with a logical model of the paths that can be taken to the various end-points in a system’s logic. The models might map users’ routes through a UI, and can also map the actions contained in APIs. The models can in turn be chained together, in order to generate tests that cut across multi-tier architecture.

Modelling maps out the paths that can be taken to each end-point involved in a system or component, with different data variables defined to exercise each logical step. For a UI, these models might map user activity and user-inputted data:

Model Based UI Testing
A quick-to-build flowchart model maps the routes a user can take through the UI of a log-in page, entering valid or invalid combinations of data into the username and password fields. These routes can result in two end-points: user authentication and login success, or failed authentication and login failure.
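The login model above can be thought of as a small directed graph, where each root-to-leaf path is one logically distinct test. The following is a minimal sketch of that idea in plain Python; the node names and graph shape are illustrative assumptions, not The VIP Test Modeller’s internal representation:

```python
# Toy directed graph of the log-in flow: each node is one
# equivalence class of user input, each leaf is an end-point.
LOGIN_MODEL = {
    "start": ["username_valid", "username_invalid"],
    "username_valid": ["uv_password_valid", "uv_password_invalid"],
    "username_invalid": ["ui_password_valid", "ui_password_invalid"],
    "uv_password_valid": ["login_success"],
    "uv_password_invalid": ["login_failure"],
    "ui_password_valid": ["login_failure"],
    "ui_password_invalid": ["login_failure"],
    "login_success": [],
    "login_failure": [],
}

def all_paths(graph, node, prefix=()):
    """Enumerate every root-to-leaf path: one path per distinct test."""
    path = prefix + (node,)
    if not graph[node]:
        return [path]
    return [p for nxt in graph[node] for p in all_paths(graph, nxt, path)]

tests = all_paths(LOGIN_MODEL, "start")
# Four logically distinct tests: one success path, three failure paths.
```

Even for this trivial two-field screen, the path count grows multiplicatively with each added field or equivalence class, which is why tooling rather than manual enumeration is needed.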

Using The VIP Test Modeller, these models are quick to build, and modelling is compatible with short, Agile sprints. A range of importers are available to convert existing tests and system requirements into models, from a range of formats. A UI Recorder can additionally be used to complete the logical models, and captured message data can be imported from tools like Fiddler.

Models similar in style can also be built to map APIs. These models contain the range of actions or methods by which an API might transform user-inputted or machine data:

Model Based API Testing
Different items can be entered into a shopping cart on an eCommerce store, leading to either a valid API call to Add Item, or to an invalid request.

The models of UIs and APIs can then be combined, in order to create models from which to test across multi-tier architecture.

Every model created in The VIP Test Modeller is re-usable, becoming subflows that can be dragged-and-dropped onto the canvas. This easy-to-use, visual approach chains the modelled components together to create master flowcharts that test across multi-tier architecture. For instance, subflows used to test a UI can be combined rapidly with subflows that generate API tests.

Testers can therefore combine models rapidly, creating flows that contain a rich set of user activity exercised against UIs, as well as the API calls generated by that activity. Flowcharts that are simple in appearance can contain information far beyond the cognitive capabilities of a human:

Subflows
Subflows chain together models of both API calls and the logic contained in UIs.
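One way to picture this chaining is to merge the graphs of two subflows, wiring an end-point of the UI model to the start node of the API model. In this hedged sketch, the node names, HTTP details, and `chain_subflows` helper are all hypothetical stand-ins for the drag-and-drop operation:

```python
# Illustrative subflow graphs: a UI fragment and an API fragment.
UI_SUBFLOW = {
    "ui_start": ["item_added_to_cart"],
    "item_added_to_cart": [],
}
API_SUBFLOW = {
    "api_start": ["call_add_item"],
    "call_add_item": ["response_201_created", "response_400_bad_request"],
    "response_201_created": [],
    "response_400_bad_request": [],
}

def chain_subflows(first, second, end_point, start_point):
    """Merge two subflow graphs, connecting an end-point of the
    first flow to the start node of the second."""
    merged = {**first, **second}
    merged[end_point] = merged[end_point] + [start_point]
    return merged

MASTER_FLOW = chain_subflows(
    UI_SUBFLOW, API_SUBFLOW, "item_added_to_cart", "api_start"
)
# The UI end-point now flows into the API call, so any path through
# the master flow traverses both tiers.
```

Because each subflow remains a self-contained graph, the same UI fragment can be wired into any number of master flows without being re-modelled.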

The assembled flowcharts account for the first three requirements of functional testing listed in Part One. First, the models account for the logical journeys that a user can take through a system, inputting data along the way. As shown in the above example, a UI might be modelled, defining the combination of fields into which users can enter valid and invalid data.

Second, the same models can include the machine data that can be generated by production activity, split by equivalence classes. Third, the range of actions or methods involved in an API call can also be modelled, setting out how an API might exercise user-inputted or machine data. All of this information is in turn reflected in the tests generated from these models, creating tests that satisfy the first three criteria listed in Part One.

A significant advantage of this drag-and-drop approach is the ease with which complex chains of APIs can be tested, helping with the fourth criterion listed in Part One. The logic and data involved in each individual API or UI screen only needs to be modelled once, and can then be connected via their start and end-points. Mathematical algorithms will then identify the vast range of combinations involved, resolving the complexity of test case design for multi-tier architecture.

The chained-up models at this point represent the routes that can be taken through a system’s multi-tier architecture, arriving at a range of end-points. This is everything needed to generate an optimised set of tests automatically using The VIP Test Modeller. However, automated testing additionally requires test data with which to exercise the routes through the system.

Defining dynamic test data for every possible test

With The VIP Test Modeller, test data variables are defined for each relevant node of the model, ready to be combined into logically distinct tests. This creates the range of data combinations that could be exercised against a system in production, both UI-inputted and machine data:

Test Data Definition
Data variables are specified to define the values that a user could create in their interactions with a system.

The data can in turn be rolled up into parameterised Load tests, injecting the data via messages executed against an API.
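As a rough illustration of that roll-up, the equivalence classes defined for each variable can be combined into parameterised API messages, ready to be replayed at volume by a load driver. The endpoint, field names, and values below are hypothetical examples, not output from the tool:

```python
from itertools import product

# Illustrative equivalence classes for each test data variable.
USERNAMES = {"valid": "alice", "empty": "", "too_long": "x" * 300}
PASSWORDS = {"valid": "s3cret!", "wrong": "nope"}

# One parameterised message per logically distinct combination.
messages = [
    {
        "case": f"username={u_class}, password={p_class}",
        "method": "POST",
        "url": "/api/login",  # hypothetical endpoint
        "body": {"username": u_val, "password": p_val},
    }
    for (u_class, u_val), (p_class, p_val)
    in product(USERNAMES.items(), PASSWORDS.items())
]
# Six distinct messages, covering every class combination.
```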

Where it gets smart is in defining the test data values dynamically for each variable. This in turn creates diverse, high-volume test data that resolves “just in time” during test execution. Realistic test data is thereby made available on demand, generating the data needed to test the range of functional logic involved in a complex system, while also testing it at a variety of workloads.

The VIP Test Modeller enables you to define synthetic data functions for every node in your functional model. There are over 500 functions that resolve dynamically during test execution, all of which can be combined using a simple, visual function editor:

Dynamic Test Data Definition
Dynamically defining data to test a username field in a UI: The Data Editor provides over 500 combinable, dynamic data generation functions.

This creates a diverse variety of data, reflecting accurately the real-world data that could be inputted into a system. The data covers information entered through UIs, as well as machine data. It can all be rolled up into messages to fire off during testing, a process described in Part Three of this series.
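A toy analogue of such combinable, just-in-time functions can be sketched with plain Python closures. The `rand_string`, `literal`, and `concat` helpers below are invented for illustration, not the tool’s actual function names:

```python
import random
import string

def rand_string(n):
    """Return a function that yields n random lowercase letters."""
    return lambda: "".join(random.choices(string.ascii_lowercase, k=n))

def literal(s):
    """Return a function that always yields the fixed string s."""
    return lambda: s

def concat(*fns):
    """Combine generator functions into one, resolved on each call."""
    return lambda: "".join(f() for f in fns)

# A username rule: fixed prefix plus eight random letters, resolved
# freshly ("just in time") each time a test executes.
username = concat(literal("user_"), rand_string(8))
```

Because the value is only resolved when `username()` is called, high-volume runs draw on a practically inexhaustible data set rather than a static spreadsheet.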

Automated and optimised test case design

Having designed the routes through the multi-tier architecture, the full range of distinct tests and data needed to reach each end-point in the model can be generated. This test case design is automated and optimised, by virtue of the flowchart models being formal, directed graphs.

The VIP Test Modeller uses mathematical algorithms to tackle the challenge of massive complexity, creating the minimum number of test cases which maximise test coverage. Tests are equivalent to paths through the flowchart, and the coverage algorithms create the smallest set of test cases needed to exercise every distinct path through the model, just as a car GPS can identify different routes through a city map:

Automated Test Case Design
Automated test case design: mathematical algorithms generate the smallest set of paths needed to exercise every logically distinct combination. This includes both user and machine activity, with tests “covering” both the UI and API.
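The idea behind such coverage algorithms can be illustrated with a simple greedy sketch: enumerate the paths through a small directed graph, then keep picking the path that covers the most not-yet-covered edges until every edge is exercised. This is only an approximation for illustration; the graph and the selection strategy are assumptions, not the tool’s actual proprietary algorithms:

```python
# A small diamond-shaped model: two branches that rejoin before
# splitting into two end-points (node names are illustrative).
MODEL = {
    "start": ["branch_a", "branch_b"],
    "branch_a": ["join"],
    "branch_b": ["join"],
    "join": ["end_success", "end_failure"],
    "end_success": [],
    "end_failure": [],
}

def all_paths(graph, node, prefix=()):
    """Enumerate every root-to-leaf path through the graph."""
    path = prefix + (node,)
    if not graph[node]:
        return [path]
    return [p for nxt in graph[node] for p in all_paths(graph, nxt, path)]

def edges_of(path):
    return {(path[i], path[i + 1]) for i in range(len(path) - 1)}

def edge_cover(graph, start):
    """Greedily pick a small set of paths covering every edge."""
    paths = all_paths(graph, start)
    remaining = {e for p in paths for e in edges_of(p)}
    chosen = []
    while remaining:
        best = max(paths, key=lambda p: len(edges_of(p) & remaining))
        chosen.append(best)
        remaining -= edges_of(best)
    return chosen

suite = edge_cover(MODEL, "start")
# Two of the four possible paths suffice to cover all six edges.
```

This mirrors the optimisation described above: exhaustive path enumeration yields four tests, but an edge-coverage criterion reduces the suite to two while still exercising every transition.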

The requirements for multi-tier testing fulfilled

This produces a set of test cases with associated data that will exercise the full range of distinct data scenarios that might occur in production, both user-inputted and machine data. The tests additionally include the variety of distinct calls and data associated with any one API, as well as the logically distinct routes that can be taken through a UI.

They can moreover be chained together using a drag-and-drop approach, creating a set of test cases that can be executed within an iteration, but which nonetheless cover all the distinct logic in complex chains of API calls.

In other words, the automatically generated tests fulfil all four criteria identified in Part One for testing across APIs and UIs. They cover:

  1. The full range of values that a user can input during production, both valid and invalid.
  2. The full range of machine data that could be generated by users in production, via UIs or APIs. This includes content-type, session IDs, authentication-headers, user-agents, and more.
  3. The full range of methods or actions that API Calls exercise on the data.
  4. The combinations of all of the above, joined together into chains of API calls.

The article has focused so far on achieving testing rigour when faced with the complexity of multi-tier architecture. However, it is worth also noting some significant time-gains of this approach, that allows rigorous testing to occur in-sprint:

  • Automated test case design from models is generally far quicker than manually and repetitively defining test steps for a large number of test cases.
  • “Just in time” data resolution avoids the bottleneck and compliance risk associated with using production data, replacing slow data refreshes and cross-team constraints.
  • Expected results are created at the same time as test cases, avoiding the manual creation of hard-to-define responses.
  • Test maintenance is significantly accelerated, avoiding arguably the greatest automated testing bottleneck. The model is the source of truth and living documentation for the system. If a component changes, only the model for that component needs to be updated. QA teams can then quickly regenerate up-to-date test assets for every master flow in which that component features. This is far faster and more reliable than having to check and update test cases, test scripts, and data by hand. The time and reliability gains are particularly significant for complex, multi-tier systems, where identifying every impact of a change across a myriad of interrelated components is highly complex. With manual maintenance, many impacts of a change go unnoticed, leaving much of the affected system untested. Nor can tests be updated quickly enough, leading to a pile-up of invalid tests that throw up automated test errors. By contrast, dependency mapping using flowchart models avoids this bottleneck: simply updating one subflow reflects the change made to one component across every flowchart in which that component features.

The next section of this series turns to how these rigorous test cases can be executed automatically, both for functional and performance testing. This will cover the additional criterion listed in Part One for Load testing across multi-tier architecture: tests must include the diverse parameters needed to simulate the range of Load and stress that a system might be subjected to in production.

[Image: Pixabay]