Using Function Point Analysis and model-based testing to objectively measure services.
A perpetual challenge in managing software testing projects is gauging whether testing is sufficiently efficient, valuable, and effective. Testing can consume 25-50% of total software delivery project costs, so optimising the value delivered is crucial. Yet efficiency and effectiveness have proven difficult to quantify objectively in testing contracts and execution.
This deficiency stems from a reliance on subjective metrics, like the number of test cases executed or defects found. Effective measurement and optimisation of services, by contrast, require quantifiable analysis of size and complexity.
Adopting function point analysis and model-based testing could offer an objective solution.
The subjectivity of testing efficiency
Efficiency determines how quickly testing generates outcomes relative to the time and resources invested. Managers often assume that specifying tester hours in a contract provides efficiency accountability. Yet, raw hours reveal little about actual throughput.
Ten senior tester hours should provide more coverage than ten junior tester hours. Skill levels, existing test assets, and tooling all further affect efficiency, and hours get booked regardless of quality or progress. Even using story points or planned test cases offers limited insight into efficiency: velocity varies with test case complexity, which subjective story point estimates fail to capture.
With so many variables obfuscating internal efficiency, managers typically resort to crude outcome measures. They track metrics like the total number of test cases executed or defects found. However, these outcomes reveal nothing about the efficiency of the testing process itself: testing could follow wasteful paths yet still log cases and bugs. Such results-oriented metrics incentivise the wrong behaviours.
The pitfalls of measuring test effectiveness
Effectiveness determines whether testing delivers business value and provides sufficient risk mitigation. However, effectiveness has traditionally proven even more nebulous to quantify than efficiency.
Test coverage and defect counts provide, at best, one-dimensional metrics of effectiveness. High test case coverage does not guarantee that meaningful scenarios have been exercised. More critically, a zero-defect result could wrongly suggest the system was rigorously tested, even when large swathes of functionality have been missed.
Testers themselves struggle to articulate objective criteria for effectiveness. Exploratory testing provides high value yet follows unpredictable paths, and defining completion requires subjective judgment calls on residual risk. This leaves managers without the tangible evidence of effectiveness they want.
Asking stakeholders directly whether they “feel” testing was effective introduces bias and unfair accountability. People often equate effectiveness with how much they like the testers or the delivery pace, rather than with objective technical insight. Yet unlike development and design, testing produces few measurable artefacts that can be judged independently.
The perils of outsourced testing contracts
The subjectivity around efficiency and effectiveness is amplified when testing gets outsourced. Without visibility into internal testing processes, managers craft contracts around outputs to create supplier accountability.
Contracts specify quantities like the number of test cases to design, hours to execute, or a maximum number of defects to find. While understandable, this incentivises suppliers to game the numbers instead of delivering rigorous testing. For example, testers may author questionable cases to meet volume requirements, or cap defect finds to avoid penalties.
Even well-intentioned suppliers struggle to translate the nuances of their expertise into contracted metrics. In searching for definitive requirements that prevent scope creep, managers end up with prescriptive contracts that reward activity rather than alignment with actual testing needs. All the while, subjectivity still hinders any determination of true efficiency and effectiveness.
Quantifying scope and complexity through FPA and model-based testing
To inject objectivity into managing testing efficiency and effectiveness, testing contracts and execution should shift from subjective metrics to quantified measures of scope size and complexity. This enables comparative analysis across projects and the calibration of optimal delivery rates.
Function point analysis (FPA) offers standardised sizing of the functional scope to be tested, independent of technology choices. By quantifying size through weighted input, output, inquiry, file, and interface components, FPA objectively provides the "what" to test.
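As a minimal sketch of the counting step, the snippet below sums component counts against the standard IFPUG complexity weights to produce an unadjusted function point size. The booking feature and its component counts are hypothetical, chosen purely for illustration.

```python
# Standard IFPUG weights per component type and complexity rating.
IFPUG_WEIGHTS = {
    "external_input":     {"low": 3, "average": 4, "high": 6},
    "external_output":    {"low": 4, "average": 5, "high": 7},
    "external_inquiry":   {"low": 3, "average": 4, "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_function_points(counts):
    """Sum weighted counts, where counts maps
    (component_type, complexity) -> number of components."""
    return sum(
        IFPUG_WEIGHTS[ctype][complexity] * n
        for (ctype, complexity), n in counts.items()
    )

# Hypothetical component counts for a small booking feature:
feature = {
    ("external_input", "average"):     4,  # create/update booking screens
    ("external_output", "low"):        2,  # confirmation email, receipt
    ("external_inquiry", "average"):   3,  # availability searches
    ("internal_file", "low"):          2,  # bookings and customers data
    ("external_interface", "average"): 1,  # payment gateway reference data
}

print(unadjusted_function_points(feature))  # 16 + 8 + 12 + 14 + 7 = 57
```

Because the weights are standardised, two differently built systems with the same functional footprint yield comparable sizes, which is what makes FPA technology-independent.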
Model-based testing (MBT) then addresses the "how" by generating test cases from system models:
[Figure: a flowchart representation of the “LBW” rule in cricket, through which MBT auto-generates paths (“tests”) covering the logic.]
Models codify complex real-world behaviour more accurately than alternative forms of specification. MBT reveals testing needs unbiased by individual testers’ skills, and automated test case generation delivers predictable gains in execution efficiency.
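To make the generation step concrete (this is an illustrative sketch, not any particular MBT tool’s API), a model can be stored as a directed graph whose start-to-verdict paths are enumerated as test cases. The decision nodes below radically simplify the real LBW rule.

```python
# Each decision node maps to (next_node, branch) pairs; "OUT" and
# "NOT OUT" are terminal verdicts. The nodes radically simplify the
# real LBW rule, purely for illustration.
MODEL = {
    "fair delivery?":   [("pitched in line?", "yes"), ("NOT OUT", "no")],
    "pitched in line?": [("struck in line?", "yes"), ("NOT OUT", "no")],
    "struck in line?":  [("hitting stumps?", "yes"), ("NOT OUT", "no")],
    "hitting stumps?":  [("OUT", "yes"), ("NOT OUT", "no")],
}

def generate_tests(node, path=()):
    """Depth-first enumeration: every start-to-verdict path is a test."""
    if node not in MODEL:  # terminal verdict reached
        yield path + (node,)
        return
    for target, branch in MODEL[node]:
        yield from generate_tests(target, path + (f"{node} {branch}",))

for i, test in enumerate(generate_tests("fair delivery?"), start=1):
    print(f"Test {i}:", " -> ".join(test))
```

Each generated path is simultaneously a test case and a traceable record of the logic it exercises, which is why generation stays unbiased and repeatable as the model evolves.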
FPA and MBT integrate to provide:
- Test sizing baselines – FPA sizes the functionality to test, enabling apples-to-apples comparisons across projects.
- Test scoping guides – MBT’s models highlight testing gaps not covered in FPA sizing.
- Test efficiency rates – comparing FPA sizes to MBT test cases executed over time provides throughput metrics (see the worked example after this list).
- Test coverage transparency – MBT tests tie back to functional areas, showing coverage comprehensively.
- Continuous efficiency monitoring/alignment – MBT auto-generates new test cases as models update, keeping testing in sync efficiently.
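To illustrate the efficiency rates above with a hypothetical worked example, dividing the FPA-sized scope that has been tested by the effort spent yields throughput rates that can be compared across projects and suppliers; every figure below is invented.

```python
# Invented figures, for illustration only: (project, function points
# whose generated tests were executed, tester-days spent, MBT tests run).
projects = [
    ("Billing revamp",   320, 40, 1150),
    ("Portal migration", 180, 30,  640),
    ("Mobile app",       250, 25,  910),
]

for name, fps, days, tests in projects:
    print(f"{name:16}  {fps / days:5.1f} FP/tester-day"
          f"  {tests / fps:5.2f} tests/FP")
```

A rate like function points per tester-day exposes real throughput differences that raw hours or test case counts hide.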
These quantified artefacts enable objective tracking of testing efficiency and effectiveness. Managers gain the data to calibrate optimal testing processes and contractor performance, grounded in evidence rather than subjective judgment calls.
Objectivity to navigate complexity
Suboptimal software testing has broad consequences downstream, from delayed releases to high-profile defects and reputational damage. Poor testing burns significant time and budget, yet organisations often lack the metrics to course correct.
By quantifying test scope, complexity, execution, and coverage through FPA and MBT, teams finally gain the objective insights needed to master efficiency and effectiveness. Testing transforms from a black box into an optimised, value-adding driver of quality and business outcomes.
Speak to a Curiosity expert today to get started with your model-based testing journey!