You are testing too big, isolate your blast radius!

Rich Jordan | 5 mins read

The Delivery Times with Rich Jordan

If you're feeling frustrated with the speed, cost, and effectiveness of your testing efforts, you're likely a victim of accidental complexity. Accidental complexity is caused by testing too big and not isolating the areas impacted by changes to your system. This leads to bloated, inefficient test suites that provide little real feedback on the true risks introduced by new features or fixes.

Many organizations fall into the trap of building monolithic end-to-end test pyramids. All tests exercise broad user journeys across multiple components, backend systems, and third-party dependencies. On the surface, this approach seems prudent - comprehensively validating integrated customer workflows. However, it overlooks two fundamental testing tenets:

  1. You should minimize the blast radius when introducing change to isolate potential defects.

  2. Your tests should target areas impacted by change to provide focused, fast feedback loops.

By building a towering end-to-end test pyramid focused on major user journeys, you are guilty of testing too big. Instead of lean, targeted tests hitting just the components impacted by each change, you're blindly bludgeoning your entire application before promoting code. This violates the principles of isolating blast radiuses and delivering fast feedback loops.

The consequences? Brutal test cycles, flaky tests, huge data/environment overheads, and protracted analysis/triage efforts when failures inevitably occur across components unrelated to actual changes. You've unwittingly fallen into a costly, unproductive loop of over-testing while still missing key risks. It's time to isolate testing blast radiuses to regain velocity and insight.

Splitting the software monolith

The first step is decoupling your monolithic application architecture. Even if you've adopted microservices, you likely still have a distributed monolith making system behaviour opaque across disparate components. Your goal should be establishing clear architectural boundaries with well-defined interfaces between components.

Discovery techniques like Domain-Driven Design, Event Storming, and Example Mapping can help surface bounded contexts and components.

With decoupled components behind explicit interfaces, you can break away from bloated end-to-end tests. Instead, use component tests targeting just the functionality impacted by a given change, ignoring unrelated parts of the system. By isolating blast radiuses, your regression packs stay lean and focused while affording faster feedback loops.
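
As a rough sketch of what that can look like in practice (in Python with pytest; the order and pricing components below are purely illustrative, not from any specific system), a component test exercises one bounded context through its explicit interface while stubbing out its neighbours:

```python
# A minimal component test sketch: a hypothetical "orders" component is
# exercised through its explicit interface, with the neighbouring pricing
# component replaced by a stub so nothing outside the blast radius is touched.
from dataclasses import dataclass
from typing import Protocol


class PricingPort(Protocol):
    """Explicit interface to the pricing component (its own bounded context)."""
    def unit_price_cents(self, sku: str) -> int: ...


@dataclass
class OrderService:
    pricing: PricingPort

    def order_total_cents(self, sku: str, quantity: int) -> int:
        return self.pricing.unit_price_cents(sku) * quantity


class StubPricing:
    """Stub standing in for the real pricing component."""
    def unit_price_cents(self, sku: str) -> int:
        return 999


def test_order_total_stays_inside_the_component_boundary():
    service = OrderService(pricing=StubPricing())
    assert service.order_total_cents("SKU-1", quantity=3) == 2997
```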

Testing at the right layer

Even with componentized architectures, you still risk falling into the trap of excessive integration testing if you don't consider the type of test needed for each architectural layer. A good test approach matches the layer being changed.

For example, when altering backend services and data stores, choose tools like in-memory databases and contract testing to validate service layers efficiently without spinning up entire backend clusters. For changes to APIs, write targeted API/contract tests against API specifications.
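
For instance, a service-layer check can run against Python's built-in in-memory SQLite rather than a provisioned backend; the customer table and helper functions below are illustrative assumptions, not a real schema:

```python
# A sketch of a service-layer test backed by an in-memory SQLite database,
# so the data access code is validated without spinning up a real backend.
import sqlite3


def create_customer_table(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")


def add_customer(conn: sqlite3.Connection, email: str) -> int:
    cursor = conn.execute("INSERT INTO customers (email) VALUES (?)", (email,))
    return cursor.lastrowid


def test_add_customer_round_trip():
    conn = sqlite3.connect(":memory:")   # disposable, in-process database
    create_customer_table(conn)

    customer_id = add_customer(conn, "jane@example.com")

    row = conn.execute(
        "SELECT email FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    assert row == ("jane@example.com",)
```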

At the UI layer, the emphasis should shift from long-running end-to-end journeys to leveraging visual validation and component/integration tests. These provide fast feedback while avoiding bloated UI automation. Focus on isolated UI units impacted by a change rather than regressing the entire application.

These layer-targeted techniques contrast with writing full end-to-end flows across every layer for even trivial changes, which inflates test suites and execution times. Just focus testing on hitting the layer(s) where a change occurred to isolate blast radiuses properly.

Push testing even further left

Even with component/layer isolation, mistakes still make it to the testing stage when design flaws and bugs slip through requirement gathering and code implementation. True risk reduction requires driving comprehension and executable tests as far "left" as possible.

You can achieve this through Model-Based Testing techniques that formalize system specifications into comprehensive conceptual models before coding ever starts. From these living models, teams generate test cases mapping uniquely to every valid scenario across functional boundaries.
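
As a simplified illustration of the idea (not any particular MBT tool), a model can be captured as a state-transition map and test cases generated as paths through it; the order lifecycle below is hypothetical:

```python
# A toy model-based testing sketch: intended behaviour is captured as a
# state-transition model, and test cases are generated as action sequences
# from the start state, one per path up to a chosen depth.
from collections import deque

# Hypothetical model of an order lifecycle: state -> {action: next_state}
MODEL = {
    "created":   {"pay": "paid", "cancel": "cancelled"},
    "paid":      {"ship": "shipped", "refund": "refunded"},
    "shipped":   {},
    "cancelled": {},
    "refunded":  {},
}


def generate_test_paths(model, start="created", max_depth=3):
    """Breadth-first generation of action sequences covering the model."""
    paths, queue = [], deque([(start, [])])
    while queue:
        state, actions = queue.popleft()
        if not model[state] or len(actions) == max_depth:
            paths.append(actions)
            continue
        for action, next_state in model[state].items():
            queue.append((next_state, actions + [action]))
    return paths


for path in generate_test_paths(MODEL):
    print(" -> ".join(path))
# prints three generated cases: cancel, pay -> ship, pay -> refund
```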

This proactive approach catches misunderstood requirements and missing tests early through model review sessions with cross-functional teams. Any behaviour gaps can be resolved through model iterations before consuming limited coding/testing cycles.

With thoroughly defined models representing a system's intended behaviour across components, your testing efforts shrink dramatically. All component modifications mapping to model violations automatically produce targeted regression tests hitting just the functionality and layers impacted. There is no need to maintain vast inventories of end-to-end UI tests.

By driving testing left into the modelling/design stage, you avoid unnecessary work implementing and testing flawed code. Tests are generated directly from the model representing the correct system behaviour. Better yet, by linking tests to models, regression verification stays laser-focused on just components touched by each change.
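
One way to picture that linkage (the test IDs and component names below are invented for illustration) is a simple mapping from generated tests to the model components they cover, filtered by whatever a change touched:

```python
# A sketch of linking generated tests to the model components they exercise,
# so a change to one component selects only its tests for regression.
TEST_TO_COMPONENTS = {
    "test_checkout_happy_path":  {"orders", "payments"},
    "test_refund_after_payment": {"payments"},
    "test_catalogue_search":     {"catalogue"},
}


def select_regression_tests(changed_components: set[str]) -> list[str]:
    """Return only the tests whose linked components were touched by the change."""
    return sorted(
        test_id
        for test_id, components in TEST_TO_COMPONENTS.items()
        if components & changed_components
    )


print(select_regression_tests({"payments"}))
# ['test_checkout_happy_path', 'test_refund_after_payment']
```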

Embrace modular test architectures

To fully realize isolated test blast radiuses, your test architecture should match the decoupled components comprising your software architecture. Tests should run wherever the components live, with no complex integrated environments required.

API/contract tests don't need heavyweight service environments; they can execute against component processes or stubs. UI component tests run within browser contexts without backends. Backend components have tests running in disposable environments torn down after execution.
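
A stub-based contract check might look something like the sketch below; the payload fields are assumptions for illustration rather than a real API specification:

```python
# A sketch of a contract check run against a stub rather than a deployed
# service: the consumer's expectations about the payload shape are asserted
# against a canned provider response.
EXPECTED_CONTRACT = {
    "id": int,
    "status": str,
    "total_cents": int,
}


def stub_get_order(order_id: int) -> dict:
    """Stub provider returning a canned payload for a hypothetical /orders endpoint."""
    return {"id": order_id, "status": "paid", "total_cents": 2997}


def test_order_payload_honours_the_contract():
    payload = stub_get_order(42)
    for field, expected_type in EXPECTED_CONTRACT.items():
        assert field in payload, f"missing field: {field}"
        assert isinstance(payload[field], expected_type)
```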

The key principle is keeping tests modular enough to run wherever the component lives. Centralizing tests enables reuse within a shared pipeline, but the component owner should still execute them in isolated contexts representing their blast radius, without pulling in downstream dependencies.

Breaking this modular model by requiring integrated environments destroys testing velocity. If UI tests depend on production data or backends need updating before each execution, you've immediately lost testing agility and fast feedback.

Design test architectures matching decoupled software components. Focus execution on isolated components impacted by changes. Integrate only to the degree necessary to validate intended end-to-end behaviour. This approach maximizes reuse and fast feedback while recognizing that over-integrating introduces brittleness and overhead that ruins test productivity.

Continuously validate everything

So far, we've explored isolating blast radiuses within conventional testing activities like unit tests, API tests, and UI automation. However, comprehensive quality assurance extends beyond just business logic to incorporate infrastructure, data, security, and compliance validation.

Modern delivery pipelines should incorporate diverse tests spanning the full gamut of system concerns, not just functional tests. Security tests analyse code and artifacts for vulnerabilities. Policy tests verify that infrastructure is provisioned correctly. Data and database tests ensure data integrity. Contract tests validate the agreed interfaces between components.

Testing isn't complete until you verify everything comprising a deliverable, from application logic down to images, networking, and compliance rules. You should automate different test types across a software factory pipeline to provide a holistic series of quality gates signing off changes before they reach production.

Of course, the key principle still applies - keep tests isolated enough to target just what changed. Failing to isolate blast radiuses by running every security, policy, and data test even for minor code tweaks quickly accumulates onerous execution overhead.

The ideal pipeline recognizes architectural boundaries to run only tests associated with the specific components impacted by a change. It executes the minimal test suite spanning functional requirements and non-functional concerns. Parallelized runners and optimized pipelines keep execution times low despite including tests addressing every potential risk.
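
A simplified sketch of that selection logic is shown below; the repository paths and job names are hypothetical, and a real pipeline would derive the changed files from version control:

```python
# A sketch of change-based selection in a pipeline: changed file paths are
# mapped onto the test jobs that guard them, so a minor code tweak does not
# trigger every security, policy, and data suite.
PATH_TO_JOBS = {
    "services/orders/":   {"orders-component-tests", "orders-contract-tests"},
    "services/payments/": {"payments-component-tests", "security-scan"},
    "infrastructure/":    {"policy-tests", "security-scan"},
    "db/migrations/":     {"database-tests"},
}


def jobs_for_change(changed_files: list[str]) -> set[str]:
    """Union of the jobs mapped to every path prefix touched by the change."""
    selected = set()
    for path in changed_files:
        for prefix, jobs in PATH_TO_JOBS.items():
            if path.startswith(prefix):
                selected |= jobs
    return selected


print(sorted(jobs_for_change(["services/orders/pricing.py"])))
# ['orders-component-tests', 'orders-contract-tests']
```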

Moving from "Big Test" to isolated quality streams

To summarize, isolating blast radiuses is about consciously designing leaner test suites and architectures that avoid needless over-testing. Don't blindly pummel entire systems to validate constrained changes. Focus quality efforts on just the layers, components, policies, and data impacted by each increment. Iterate on the system model and validate behaviour through fast component tests, without expensive integrated environments.

This philosophy is a significant departure from prevailing big testing mindsets that throw the kitchen sink of regressions at each build regardless of change scope. Most organizations expend immense effort automating open-ended end-to-end tests, building bigger regression monsters with each release cycle. That bloated automation becomes increasingly fragile and expensive to maintain while still missing key risks.

Isolating blast radiuses flips this model on its head. Tests comprise focused quality streams validating just enough to provide fast feedback on prioritized change areas. Sleek pipelines execute only the component tests affected by a given change rather than massive regression inventories. A consumable model defines the actual system, and targeted tests capture it without redundancy. Comprehensive automation covering application, infrastructure, and security concerns still exists, but runs intelligently at the right architectural layer.

Admittedly, this lean testing approach requires more thoughtful test architecture and pipeline design up front. Teams can't just churn out endless test automation on autopilot as most do today. However, the payoffs are tremendous - orders of magnitude performance gains, cost reductions, meaningful quality insights, less environment overhead, faster feedback, fewer escaped defects, less rework, and more rapid releases.

Trust your testing instincts

Many teams intrinsically understand they need to isolate blast radiuses and avoid over-testing, but they lack examples from prevailing internal thought leaders endorsing this counterintuitive stance. The testing world remains infatuated with achieving higher percentages of automated tests without keeping things properly lean and focused.

Curiosity are here to help you diagnose and solve your enterprise’s quality challenges. Book a meeting or contact us to talk through the primary challenges in your software delivery.
