Overview of Solutions and Components


This section is an overview of how to use and build up Test Data and Automation capabilities using VIP and TestModeller.

The core features are:

  • VIP – Visual Integration Processor. This allows the creation and running of VIP flows; these have the suffix .vip, or .enc.vip for encrypted flows
  • VIP Executor – one or many server engines. The VIP Executor server will pick up jobs defined in Modeller and execute them. These jobs can be standard shipped VIP flows, internal Modeller processes, or custom VIP flows
  • Test Modeller – the UI that allows the definition and maintenance of modelling assets. Test Modeller also allows components to be registered and assembled into solutions. These can then be invoked directly using VIP Executor (single job submission), by incorporating solutions or components as part of models of variations, or by incorporating solutions or components as part of data journeys
  • Batch execution – Components or solutions can be invoked in batch as part of DevOps processes. They can be invoked by:
    1. Submitting a job remotely to the VIP Executor
    2. Running a Windows command script
    3. Running the flow directly
    4. Putting flows in memory using the VIP Server Manager and then invoking the flow on the same or a remote machine

Parameters are passed in:

  1. Via an XML control file for method 1
  2. By editing a Windows command script for method 2
  3. By entering the parameters into the Arguments section of the flow for method 3
  4. By defining parameters in the invocation string for method 4

VIP Flows – Components

Each VIP flow can be called stand-alone, by other flows, or as part of a solution. Each component will be documented separately and can be called as part of multiple solutions.

Users are encouraged to learn how to use basic VIP flows as they provide extremely powerful data and automation capabilities.

When running flows you can see and edit the arguments.

VIP Editor

The arguments section lets you change parameters before running the flow.

Major parameters should have annotations associated with them:

You can right-click to edit and define annotations.

Running VIP Encrypted Flows

If you run an encrypted flow you will be able to see the annotations and the parameters.

You can edit the values and execute the flow stand-alone.

The log file will be found in ..\…\flowlocation.enc.vip\output

 

Making Components Available in Modeller

For a component to be run as part of a solution, or to be run using VIP Executor (to submit the required feature), the VIP flow must be added to a server. This process makes the activity available for use by Modeller.

The steps to add the process to the server are:

  1. Copy the flow to the physical server that will be running the flow
  2. Add the definition of the flow to modeller (There are three different methods – see later)

N.B.       Once added, the VIP flow must remain in the same physical location. For example,
“C:\VIPTDM\MasterFileController\MasterFileController.vip” has been added as a process to server BIGONE.

N.B.       You can submit the same solution component on another server as long as the file location is the same. Remember, if a flow has been changed it must be copied to all other servers, otherwise you will get inconsistent results.

Adding the VIP definition to a server

Before you add a process (component) to a server, check whether the process has already been added.

Follow Profile/Automation Servers/Select your server

Server processes are unique by name; if the process already exists, it may be easier to simply add or modify its parameters.

Example server processes.

To add a process, click on Add File.

Method 1 – add the .vip.json file

Select the file with the suffix .vip.json

Enter the name of the process (remember, it is unique – so the same VIP flow can be added as a different process by giving it a different name).

Once you get to the parameters, click on the x to remove any non-important arguments (ones without annotations will be removed).

Example vip.json flow parameters.

With this method you have to go into each field and define attributes one by one.  It is easier to define these form attributes using an Excel spreadsheet.  See method 2.

 

Method 2 – Using an Excel definition file

If the flow is being shipped as part of a solution it will most likely be included in an Excel spreadsheet.

Example Add processes Spreadsheet

 

This spreadsheet contains the standard name and the parameters along with their descriptions, defaults, and form attributes.

Example Parameters sheet

 

Some of the parameters contain substitutions, starting with a ?. These will be substituted by the VIP Server job engine when the job is submitted. For example:

?workdir will be changed to the unique folder name on the server associated with the job

?jobid will be changed to the modeller job number
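Conceptually, this substitution is a token replacement over the parameter values before the job is handed to the flow. The sketch below is illustrative only; the function and data structures are assumptions for this example, not VIP internals:

```python
def substitute_tokens(params, job_id, work_dir):
    """Replace ?workdir and ?jobid tokens in each parameter value.

    Illustrative sketch only -- not VIP's actual implementation.
    """
    replacements = {"?workdir": work_dir, "?jobid": str(job_id)}
    resolved = {}
    for key, value in params.items():
        for token, actual in replacements.items():
            value = value.replace(token, actual)
        resolved[key] = value
    return resolved

# Hypothetical parameter values as they might appear in the spreadsheet:
params = {"outputFolder": r"?workdir\results", "runLabel": "job-?jobid"}
print(substitute_tokens(params, 39672, r"C:\VIPExcelWork\TestAutomation\Work\6795 - 39672"))
```

Any parameter value that contains no ? tokens passes through unchanged.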

 

Method 3 – Using the automationcontrollermastertemplate.xlsx file

When you install a server, a number of default VIP flows are shipped. These flows are contained in a starter spreadsheet. This spreadsheet is run initially, and can be updated and rerun to reinstall the standard flows. Check this sheet to see if the VIP flows (components) have been included.

Example: AutomationControllerMasterTemplate

Some processes may be shipped but can be marked as inactive.  Set these to Y if you wish to activate them for use in a solution – then apply the new sheet to the server.

Click Edit on the server to update the master template.

 

Making Processes available to Flows

Once a process is available, you can test it by running it from the VIP Executor menu.

Example submission of process

Some processes may not be suitable to be submitted using this method, but it is good practice to make sure it runs correctly.

 

You must now associate the process with a Test Data Catalogue. Find the process and click on QuickStart to set up a Test Criteria and, optionally, an allocated test so you can check that it runs correctly.

Click on QuickStart

 

Understanding Parameters in Models

It is important to know how parameters flow down to the final execution of the flow. If you do not set a parameter, the default from the next level down will apply. The precedence order is as follows:

  1. The Model variable assigned to the automation activity
  2. The default of the Test Criteria parameter
  3. The process default variable
  4. The VIP flow default parameter
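This precedence chain can be pictured as a first-match lookup from the highest level down: the first level that defines a value wins. A minimal illustrative sketch (the function and argument names are assumptions that mirror the list above, not Modeller's implementation):

```python
def resolve_parameter(name, model_vars, criteria_defaults, process_defaults, flow_defaults):
    """Return the first value found, searching the highest-precedence level first.

    Levels, in order: model variable, Test Criteria default,
    process default, VIP flow default. Illustrative sketch only.
    """
    for level in (model_vars, criteria_defaults, process_defaults, flow_defaults):
        if name in level:
            return level[name]
    return None  # parameter undefined at every level

# Example: "region" is set at the Test Criteria level only, so that default wins.
value = resolve_parameter(
    "region",
    model_vars={},                       # 1. no model variable assigned
    criteria_defaults={"region": "EU"},  # 2. Test Criteria default (applies)
    process_defaults={"region": "US"},   # 3. process default (lower precedence)
    flow_defaults={"region": "GLOBAL"},  # 4. VIP flow default (lowest)
)
print(value)  # EU
```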

 

Also remember that not all VIP parameters are exposed up into the process definition, just as not all process definition parameters are used by the associated Test Criteria.

When you run a process, check the logs to see what variables are being used in the final process.

 

Solutions

Solutions are collections of flows usually linked to a required piece of automation or test data activity.

Example Solution to manage Messages.  The Solution refers to specific VIP flows (components).

Solutions can be submitted from VIP Executor or as part of a model, and will be executed as part of a job submitted to create code or execute TDA activities.

Solutions can be made up of several phases that can then be linked together.

Running Solutions from VIP Executor

Often you may wish to process files or databases and create configuration files that are then used by subsequent processes. In this example, a spreadsheet formula is analyzed and returned in phase 1. You can save the returned Excel results, add in some values to test the expression, and then submit the new Excel sheet to phase 2. You can then add in and select Gherkin scenarios and submit to phase 3.

All three forms roll up to a solution.  The solution is shipped as standard inside AutomationControllerMasterTemplate.xlsx

For further details, see the solution Parsing and Testing Excel Expressions.

Running Solutions as part of models

When creating solutions associated with a model there are a few core concepts that need to be understood.

Working Directories

When a job is submitted to a server it gets assigned a job id.  Each job will also create a unique folder on the server containing any files that are created or modified during the process.

For example, once you submit a job look in the folder and you will see various files have been added.  An example folder name would be:

C:\VIPExcelWork\TestAutomation\Work\6795 – 39672

This folder can be used by groups of components to sequentially process different automation activities. For example, if you create various CSV files in that folder using one component, you could use another component to read those CSVs later in the process.
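As an illustration of that pattern, one component might write a CSV into the job's working directory and a later component might read it back. This is a hedged sketch, not a VIP flow; the function, folder, and file names are examples only:

```python
import csv
import os
import tempfile

def write_stage(work_dir):
    """First component: create a CSV of generated test data in the work folder."""
    path = os.path.join(work_dir, "accounts.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name"])
        writer.writerow(["1", "Alice"])
    return path

def read_stage(work_dir):
    """Later component: pick up the CSV created earlier in the same job folder."""
    with open(os.path.join(work_dir, "accounts.csv"), newline="") as f:
        return list(csv.reader(f))

# A temp folder stands in for the job's unique work folder, e.g. ...\Work\6795 - 39672
work_dir = tempfile.mkdtemp()
write_stage(work_dir)
print(read_stage(work_dir))  # [['id', 'name'], ['1', 'Alice']]
```

Because both stages resolve the same folder (via ?workdir in the real product), neither needs to hard-code a path.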

When you add a component to a server, you can point variables at this folder by assigning ?workdir; you can also use the job id as a variable by assigning ?jobid.

Example assignment of variables to components

 

The Ordering of Job Paths

When you are building a solution, you can break the automation activities down into three types:

  1. Pre process
  2. The path – data creation and data finding
  3. Post process

 

In the example below we have created some additional paths to manage the automation.  In this case we wish to perform some sweeper activities after each of the unique combinations of data has been created.

To set this up, create new Start/End journeys with Test Data Catalogue activities attached to the path. Once you have created your test cases (paths), rename the paths to start with @Pre or @Post. If you wish to perform multiple pre or post activities, assign the test names alphabetically, for example @Pre-01-, @Pre-02-. These activities will then be run separately from the main data find or data creation paths.
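The naming convention can be pictured as a sort key: @Pre paths first (alphabetically), then the main data paths, then @Post paths (alphabetically). A hypothetical sketch, not Modeller's actual scheduler; here the main paths are also sorted alphabetically just to keep the order deterministic:

```python
def execution_order(path_names):
    """Order paths: @Pre (alphabetical), then main paths, then @Post (alphabetical).

    Illustrative only -- assumes the @Pre/@Post naming convention
    described above, not Modeller's real scheduling logic.
    """
    def key(name):
        if name.startswith("@Pre"):
            return (0, name)   # pre-process paths run first
        if name.startswith("@Post"):
            return (2, name)   # post-process paths run last
        return (1, name)       # main data find/creation paths in between

    return sorted(path_names, key=key)

paths = ["CreateOrder", "@Post-01-ResolveCsv", "@Pre-02-ClearDown",
         "FindCustomer", "@Pre-01-Setup", "@Post-02-ExportKafka"]
print(execution_order(paths))
```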

In this example, we have added two extra paths with four post steps:

  1. Resolve the CSV, copy the CSVs to a specific network folder, and upload the data in that folder to Neo4j
  2. Export the CSV data to Kafka

The resulting process in this example will take the CSV created by the multiple paths, resolve it (post process), then create JSON messages for each row in the CSV (post process).

When designing solutions and components, try to use common components across different solutions. If the process requires differing combinations of data finds and makes, use the pre and post process features to complete the automation activities.