The Test Data Allocation API

The test data allocation API can be integrated with any tool or framework. The test data catalogue API exposes endpoints for all the Test Data Manager capabilities available within the data catalogue user interface. For test data allocation there are two core endpoints you will require:

  1. Specifying the tests to run and executing the associated allocations (finds and makes) across them.
  2. Retrieving the results of an allocation.

Authentication within the test data catalogue API is achieved using an API Key. To get started, you will first need to create an API Key within the Test Modeller workspace you want to connect to.

To do this, navigate to the profile tab in the left side menu and view the details. This will show the API Key and API URL; take note of both of these. The API Key is unique to your account, and the API URL is the endpoint you need to connect to.

API Key and API URL within Test Modeller

The API URL combined with the API Key will give you the ability to consume and connect to the data catalogues without needing any further authentication. At any time, if the key becomes compromised you can revoke and refresh the key associated with your account from the page you used to create it.

You can review the swagger API documentation by appending the following path to the API URL:

/swagger-ui.html?urls.primaryName=Test%20Data%20Manager

For the cloud portal this can be accessed using the following URL:

http://api.cloud.testinsights.io/swagger-ui.html?urls.primaryName=Test%20Data%20Manager

We advise you review the API documentation available on the API you will be connecting to, since it contains the documentation appropriate to your API version and capabilities.

There are two methods to follow in order to perform allocations.

  1. Execute Test Allocations

Firstly, you will need to create an allocation job on your automation server. This will call the appropriate finds and makes and allocate the results within the specified allocation pools.

You can view our interactive documentation for this endpoint within our API documentation specified above.

Test Modeller API 1

The allocation endpoint takes three parameters:

  • {apiKey} – The API Key for connecting to your selected workspace.
  • {poolname} – The data pool name to perform the allocation against.
  • {servername} – The server to use for performing the data allocation.

POST – /api/apikey/{apiKey}/allocation-pool/{poolname}/resolve/server/{servername}/execute

The endpoint takes the JSON body below, which specifies the executions to perform resolutions against. This is a list of the allocation test names, pool names, and suite names to use for the resolution:

[
  {
    "allocationTestName": "string",
    "poolName": "string",
    "suiteName": "string"
  }
]
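As a minimal sketch, the request can be assembled as follows. The API key, pool name, server name, and test names here are hypothetical placeholders; substitute the values from your own workspace. The actual POST is left as a comment so the sketch stays self-contained:

```python
import json

# Hypothetical values -- substitute your own API URL and API Key
# from the Test Modeller profile tab.
API_URL = "http://api.cloud.testinsights.io"
API_KEY = "your-api-key"

def build_execute_request(api_url, api_key, pool_name, server_name, allocations):
    """Build the URL and JSON body for the allocation execute endpoint."""
    url = (f"{api_url}/api/apikey/{api_key}/allocation-pool/"
           f"{pool_name}/resolve/server/{server_name}/execute")
    return url, json.dumps(allocations)

# One entry per test to allocate (hypothetical names).
allocations = [{
    "allocationTestName": "LoginTest",
    "poolName": "MyPool",
    "suiteName": "Regression",
}]

url, body = build_execute_request(API_URL, API_KEY, "MyPool", "MyServer", allocations)
# POST `body` to `url` with a Content-Type: application/json header,
# e.g. via urllib.request or the requests library.
print(url)
```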

  2. Retrieve Allocation Results

Once the allocation has completed successfully, the allocation tests within each data pool will have been assigned the appropriate test data. Now, you can query the API to retrieve these values and use them within your own framework or toolset.

You can view our interactive documentation for this endpoint within our API documentation specified above.

Test Modeller API JSON

The results endpoint takes four parameters:

  • {apiKey} – The API Key for connecting to your selected workspace.
  • {pool_name} – The data pool to retrieve results from.
  • {suite_name} – The test suite to retrieve the results for.
  • {test_name} – The test name to retrieve the results for.

GET – /api/apikey/{apiKey}/allocation-pool/{pool_name}/suite/{suite_name}/allocated-test/{test_name}/result/value

This endpoint returns the body of allocated results for the test case shown below. This is a list of allocation values as key-value maps, where 'additionalProp1', 'additionalProp2', and 'additionalProp3' correspond to the names of the output columns for the allocations which have been executed:

{
  "dataRows": [
    {
      "additionalProp1": "string",
      "additionalProp2": "string",
      "additionalProp3": "string"
    }
  ]
}
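A minimal sketch of building this GET URL and reading the response follows. The workspace details, column names, and sample response here are hypothetical; the real keys in each dataRows entry are the output column names of your executed allocation:

```python
import json

# Hypothetical workspace details -- substitute your own.
API_URL = "http://api.cloud.testinsights.io"
API_KEY = "your-api-key"

def build_result_url(api_url, api_key, pool_name, suite_name, test_name):
    """Build the GET URL for retrieving the allocated values of one test."""
    return (f"{api_url}/api/apikey/{api_key}/allocation-pool/{pool_name}"
            f"/suite/{suite_name}/allocated-test/{test_name}/result/value")

url = build_result_url(API_URL, API_KEY, "MyPool", "Regression", "LoginTest")
# GET this URL (e.g. via urllib.request) to receive the JSON body.

# A sample response in the shape shown above (hypothetical column names).
response = json.loads('{"dataRows": [{"username": "user01", "accountId": "A-1001"}]}')

# Each entry in dataRows maps an output column name to its allocated value.
first_row = response["dataRows"][0]
```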

When using the data allocation API, we advise bundling all the allocations to be executed into one job that runs as a pre-processing activity. This is (a) far more efficient, since the appropriate engines only need to be spun up once, and (b) necessary for uniqueness, since the allocation instance only guarantees unique allocated values within one execution session. It is also worth noting that once an allocation has been executed, the results persist as cached values within the data allocation API. You may therefore choose to perform the allocation only once (within the portal's user interface) and then retrieve the same results thereafter.
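The bundling advice can be sketched as building one batch for the whole run (test names here are hypothetical), so a single execute call spins the engines up once and each test only needs a GET afterwards:

```python
# Build one batch of allocations so a single POST to the execute
# endpoint covers the whole test run as a pre-processing step.
batch = [
    {"allocationTestName": name, "poolName": "MyPool", "suiteName": "Regression"}
    for name in ("LoginTest", "CheckoutTest", "RefundTest")
]
# POST `batch` once before the run starts; each test then only needs a
# GET against the results endpoint to fetch its allocated values.
```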