Introducing “Model-Driven Development”: the agility of BDD, the rigour of Model-Based Testing

Behaviour-Driven Development (BDD) emerged in 2006 [1], partly in response to perennial test and development pain points that lingered in spite of “agile” methodologies. A ubiquitous language shared across design, development, and QA would avoid the frustration of miscommunication, and the defects it perpetuates.

Test and development would instead be a steady journey of mastering the system being built, rather than a wild goose chase that ends in discovering too late that you’ve misunderstood the desired system. Engineers, meanwhile, would know exactly what to prioritise when updating fast-changing systems within short iterations.

BDD Today: New approach, or same old barriers? 

BDD has since grown widespread in the QA community, and is now commonplace at organisations such as large banks and financial institutions.

However, “doing BDD” for these teams often means using a specification language like Gherkin to formulate scenarios, and a test automation framework like Cucumber to execute them as tests. Many of the challenges of process and principle that BDD sought to uproot remain, with automation frameworks and specification languages serving as point solutions to individual problems:

[Image: Common test and development bottlenecks persist in “mini-Waterfalls”.]

Miscommunications, defects, and bottlenecks persist, stemming from poorly specified requirements and a siloed approach. Defects are still detected late, by overly manual QA that itself starts late. Considered from the perspective of people, process, and technology, numerous pre-BDD problems persist:

1.     A siloed approach.

Organisations often still design, code, and test systems – in that order. Miscommunications and delays creep in at each stage, as teams wait for the information they need to fulfil their siloed roles to be passed on.

The difference is that teams try to squeeze all this into six-week “mini-Waterfalls”, rather than 18-month projects. QA is pushed ever later, rolling over constantly into the next iteration and leaving the bulk of a system exposed to defects.

2.     Poor designs leave developers guessing.

Fragmentary user stories have replaced the monolithic requirements documents that stood at the start of Waterfall projects. These disparate scenarios are individually incomplete, but are not connected formally into a complete system either.

The natural language used to formulate Behaviour-Driven scenarios is furthermore good for designing UIs and front-end systems, but poorly suited to machine logic and APIs. It also tends to be ambiguous and logically imprecise, leaving intact the root cause of the majority of defects: poorly defined requirements.
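To illustrate the format, a Behaviour-Driven scenario in Gherkin might read as follows (the feature and values here are invented for illustration); a framework like Cucumber then maps each Given/When/Then step to automation code:

```gherkin
Feature: Account withdrawal
  Scenario: Successful withdrawal within the available balance
    Given an account with a balance of 100
    When the user withdraws 40
    Then the remaining balance should be 60
```

Note that the scenario specifies one desired behaviour precisely, but says nothing about overdrawn accounts, invalid amounts, or the system’s wider logic.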

3.     Low test coverage and over-testing.

Automated tests derived from Behaviour-Driven scenarios are naturally incomplete, focusing on the “happy path” scenarios that are specified as desired user behaviour.

QA therefore leaves much of the system’s logic exposed to defects, particularly the negative scenarios that are most likely to cause severe failures. At the same time, certain logic will be repetitiously over-tested, by virtue of the linear nature of Behaviour-Driven scenarios that are not consolidated during testing.
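This imbalance can be seen in a small sketch. Assuming a handful of invented linear scenarios, simply counting step occurrences shows shared steps re-executed in every test, while unspecified negative steps never appear at all:

```python
# Illustrative sketch: linear scenarios repeat shared steps (over-testing),
# while unspecified negative paths go untested. Scenarios are invented.
from collections import Counter

scenarios = [
    ["open app", "log in", "search product", "add to basket", "pay by card"],
    ["open app", "log in", "search product", "add to basket", "pay by voucher"],
    ["open app", "log in", "view order history"],
]

step_counts = Counter(step for scenario in scenarios for step in scenario)

# Steps executed repeatedly across scenarios:
repeated = {step: n for step, n in step_counts.items() if n > 1}
print(repeated)
# Steps such as "invalid login" or "payment declined" never appear,
# because only desired "happy path" behaviour was specified.
```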

4.     Data and environments undermine rigour and agility.

QA teams often still rely on centralised, overworked Ops teams to copy complex production data and spin up environments. Further time is wasted as testers wait for data and environments to be provisioned, or wait while limited resources are used by another team.

The large copies of masked production data are furthermore low-variety, containing only the expected scenarios that have occurred in the past. They do not contain the outliers and unexpected results needed for rigorous testing, and by definition cannot test functionality not yet released to production.  
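By contrast, even simple synthetic generation can produce the boundary and out-of-range values that production copies typically lack. A minimal sketch, assuming a hypothetical numeric field with known valid bounds:

```python
# Minimal sketch of generating "unexpected" test values that masked
# production copies typically lack. The field and bounds are hypothetical.
def boundary_values(lo, hi):
    """Return boundary and just-out-of-range values for a numeric field."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# e.g. an "age" field valid from 18 to 65:
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```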

5.     Technical debt and manual maintenance.

The disparate, unconnected scenarios further lack formal dependency mapping. This means that the upstream and downstream impact of new stories or change requests must be identified manually across increasingly complex systems.

Low-priority changes in turn throw up system-wide defects, while test cases, data, and automated tests must all be updated manually to reflect the latest changes. This manual maintenance is slow and repetitious, and can quickly swallow up whole iterations at the expense of testing new or critical functionality.

Model-Driven Development: A Return to the Principles of BDD

This siloed, overly manual approach stands in contrast to the original principles of BDD. It undermines parallelism and the ability to move desired user behaviour quickly from design to deployment.

“Model-Driven Development” offers an alternative approach that retains BDD’s flexibility in the face of fast-changing user needs, while introducing a greater degree of rigour and automation. As the below video demonstrates, QA teams can move automatically from existing artefacts to the complete models needed for automated and optimised testing:

This approach builds on existing tests and Behaviour-Driven scenarios to combine the principles of BDD with the rigour of Model-Based Testing. It enables collaborative business and engineering teams to: 

1.     “Shift left” to design better systems.

It is possible to build complete, formal models within short iterations. Behaviour-Driven requirements and existing tests can be converted automatically to flowcharts, for instance, using Gherkin importers and a UI Recorder. 

Flowcharts are already familiar to BAs, many of whom use Business Process Models. The models therefore retain a “ubiquitous language”, while developers and QA can work in parallel from the same system design.

This facilitates a “shift left” approach where testers help to build better quality, testable systems, removing potentially costly defects during the design phase. The flowcharts have the local precision needed to eradicate design ambiguities, while incompleteness is more easily spotted at the model-level.  

2.     “Shift right” and derive a complete set of tests directly from the design.

The Model-Driven Approach is also Test-Driven, with automated tests and data derivable directly from the flowcharts. QA then becomes a largely automated comparison of the designs to the code, working in parallel with requirements gatherers.  

This not only eradicates the bottlenecks associated with manual test case design, but drives up the quality of testing. Automated coverage techniques generate the smallest set of test cases needed to exhaustively test the requirements model, while Risk-Based approaches can target new or critical functionality based on time and test history.
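As a simplified sketch of the idea, the snippet below greedily derives a small set of paths that together exercise every edge of a toy flowchart model. The model and the greedy strategy are invented stand-ins for real coverage algorithms:

```python
# Illustrative sketch: derive a small set of start-to-end paths covering
# every edge ("branch") of a simple flowchart. Assumes an acyclic model
# in which every node reaches the terminal "end" node.

model = {
    "start":    ["validate"],
    "validate": ["process", "reject"],
    "process":  ["end"],
    "reject":   ["end"],
    "end":      [],
}

def edge_covering_paths(graph, start="start", end="end"):
    """Greedily enumerate start-to-end paths until every edge is covered."""
    uncovered = {(a, b) for a, succs in graph.items() for b in succs}
    paths = []
    while uncovered:
        path, node = [start], start
        while node != end:
            # Prefer an uncovered outgoing edge, else take the first one.
            nxt = next((b for b in graph[node] if (node, b) in uncovered),
                       graph[node][0])
            uncovered.discard((node, nxt))
            path.append(nxt)
            node = nxt
        paths.append(path)
    return paths

print(edge_covering_paths(model))
```

For this toy model, two paths suffice to cover both the “process” and “reject” branches, rather than one test per linear scenario.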

3.     Implement fast-changing user needs within short iterations.

The automated tests and test data have been derived directly from the system designs, and are therefore traceable directly to them. This enables automated test maintenance: as the central designs change, the rigorous automated tests update automatically. QA can therefore focus on developing test assets for newly introduced functionality, testing fast-changing systems rigorously. 

Full dependency mapping meanwhile means that when the designs are updated, developers can know exactly what across complex systems has been affected, implementing the change reliably and without potentially catastrophic unforeseen consequences. 
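The idea can be sketched as a traversal of a dependency graph. Assuming an invented set of components, the downstream impact of a change is everything transitively reachable from the changed node:

```python
# Minimal sketch of impact analysis over a dependency graph. Component
# names are invented; edges point from a component to those depending on it.
from collections import deque

dependents = {
    "pricing-rules":  ["quote-api", "batch-repricer"],
    "quote-api":      ["web-frontend"],
    "batch-repricer": [],
    "web-frontend":   [],
}

def downstream_impact(graph, changed):
    """Return every component transitively affected by a change."""
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(downstream_impact(dependents, "pricing-rules")))
# ['batch-repricer', 'quote-api', 'web-frontend']
```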

Model-Based Testing can thereby facilitate a truly Behaviour-Driven approach: one that retains all the agility of Behaviour-Driven Development, while introducing the rigour needed to deliver thoroughly tested software within short iterations:

[Image: Model-Driven Development: An example iteration combining Behaviour-Driven Development and Model-Based Testing.]

If you would like to find out more or arrange a demo, please feel free to get in touch at info@curiosity.software.

[1] Dan North & Associates (2006), Introducing BDD.

[Image: Pixabay]