Essential QA Metrics For Teams

QA metrics are measurable values used to track and evaluate the performance of an application, the team's efforts, the effectiveness of the testing process, and testing outcomes throughout the development lifecycle.

Metrics enable QA managers to ensure quality at every stage by identifying bottlenecks, optimizing test coverage, and making data-driven decisions. They also support efficient resource allocation, informed planning for subsequent testing phases, and cost analysis.
Testers can use metrics to adjust test automation efforts, improve test scripts, and ensure that testing efforts align with development goals.

Different teams measure various aspects depending on their specific goals.

Metrics vs KPIs

“How good is your software?”

To answer this question, we should be able to provide quantifiable data to validate the quality of software, rather than stating that the software is good, customers are happy, or tests are passing.

Metrics such as the number of production-down scenarios for customers, mean time to failure, and soak test results provide data about reliability and add weight to your answer.

KPIs (Key Performance Indicators) are specific metrics aligned with business objectives that measure performance in key areas, for example, the number of deals booked for the product. KPIs are crucial for driving improvements in software quality and ensuring alignment with business goals. E.g., test coverage, defect density, and resolution times give a clear view of how testing efforts align with broader objectives.

OKRs (Objectives and Key Results) are transparent goals that can be reviewed periodically. Unlike KPIs that are often specific to business processes, OKRs are broader targets. For example, an OKR would be a target of $100 million in revenue or a 50% CSAT score.

QA Benchmarking

A QA benchmark is a standard or a reference point for assessing the performance of the product or the testing activity. This involves comparing the metrics and results of current projects and efforts with established benchmarks or industry best practices. Through this comparison, organizations can set realistic and achievable quality targets that align with industry standards or best practices.

Why QA Metrics?

  • To track bug status: QA metrics include details like the number of bugs found/fixed/reopened/closed/deferred and identify which ones are critical

  • To inspect and verify issues effectively

  • To identify areas that need improvement or processes that require enhancement, and refine testing strategies to optimize performance

  • To predict potential risks and address them proactively

  • To plan better by understanding trends and outcomes from metrics, thereby making data-driven decisions

Absolute Metrics and Derivative Metrics

Base / Absolute Metrics

As the name suggests, these are raw counts from actual data (number of tests passed, failed, etc.) collected throughout test case development and execution, and they give a quick overview of the current state.

Derivative Metrics

Absolute metrics are used in conjunction with derivative metrics to get more detailed insights into the testing process, such as identifying trends, correlations, or areas for improvement. For example, analyzing defect density or defects per test case gives insight into recurring issues or patterns of test failures. These metrics allow testing teams to dive deeper into issues that may be affecting the quality and speed of the testing pipeline and the overall process.
Focusing solely on an absolute metric like the number of tests passed, without considering a derivative metric like defect severity or test coverage, gives an incomplete picture.

Choosing the right combination of metrics is important to ensure that the derivative metric accurately reflects the quality of the application and the testing process.

Metrics for QA Managers

It is important to pick metrics that strike the right balance of resources (people, processes, and tools) to understand customer expectations and match them accordingly. The strategy should focus on measuring change and communicating how the overall testing process evolves over time, so the team can communicate effectively and make data-backed decisions.

Metrics primarily help analyze and measure:

  • test duration and team efforts

  • bug count, severity of the bugs

  • effectiveness of the test cases

  • test execution progress

  • stability of test environments

  • overall quality of the software

  • time to ship, expectations for the release

  • cost involved

How to pick the right metrics?

How do we decide what metric is useful?

This depends on factors such as project demands and goals, both internal (test case or test execution related) and external (customer satisfaction, a business outcome).

We can classify these metrics into two categories:

  • Lagging indicators
    Lagging indicators evaluate past performance based on results, like revenue from a product or feature's success, customer satisfaction scores, and churn rate. These are helpful for understanding how well the testing activity paid off.

  • Leading indicators
    Leading indicators guide managers before an issue arises, predicting future performance or risks, and help steer products forward successfully. E.g., open bugs, pass percentage, build failures, MTTR, code coverage.

  • Vanity metrics
    Vanity metrics, or feel-good metrics, such as the number of bugs logged or the number of test cases written, look impressive but offer little insight on their own.

Arriving at a metric for the project

A popular model like the GQM (Goal Question Metric) framework is a good place to start. GQM starts with a Goal in mind, derives Questions that must be answered to reach that goal, and then defines Metrics that answer those questions.

Source: eecs.qmul.ac.uk/~norman/papers/qa_metrics_a..

Some questions could be:

  • Are there particular types of defects that are consistently missed?

  • What are the key functionalities and features that need to be tested?

  • Are there any high-risk areas or critical components that require more focus?

Now, let's look at some important metrics that help managers and QA teams.

Test Monitoring and Efficiency metrics

These metrics help QA teams gauge their testing efforts by showing how effectively their tests find and address bugs. They provide insights into areas needing improvement and help prioritize where to enhance test coverage.

The following section discusses the metrics that correspond to test efficiency.

1. Percentage of Passed Test Cases = (Tests Passed / Total Tests Executed) × 100

Similarly, the percentage of failed test cases and the percentage of blocked test cases.

These are used in conjunction with defect KPIs such as average resolution time, time taken by developers to fix defects, % of critical defects, % of deferred defects, % of defects rejected, % accepted, % fixed, etc.
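As a rough sketch, the pass/fail/blocked percentages above can be computed from raw run counts (all the numbers below are illustrative, not from a real project):

```python
# Sketch: test-run percentages from raw counts (illustrative data).

def pct(part: int, total: int) -> float:
    """Return part as a percentage of total (0.0 if total is 0)."""
    return (part / total * 100) if total else 0.0

# Hypothetical results of one test cycle
executed = {"passed": 180, "failed": 15, "blocked": 5}
total_executed = sum(executed.values())  # 200

pass_rate = pct(executed["passed"], total_executed)      # 90.0
fail_rate = pct(executed["failed"], total_executed)      # 7.5
blocked_rate = pct(executed["blocked"], total_executed)  # 2.5

print(f"Passed: {pass_rate:.1f}%  Failed: {fail_rate:.1f}%  Blocked: {blocked_rate:.1f}%")
```

The same `pct` helper works for the defect KPIs (% critical, % deferred, % rejected, and so on), each computed against the total defect count.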

Test Effectiveness

Test effectiveness shines a light on how much value the tests add to software quality.

Defect Detection Effectiveness (DDE) is the overall effectiveness of testing, calculated by comparing the number of defects found before software release with those found after release to customers. The ideal is a DDE of 100%, meaning all defects were found before going live, though in practice this is hypothetical.

DDE = (Bugs found in testing / (Bugs found in testing + Bugs found after release)) × 100

The higher the DDE measure the better. A good way to ensure this is to keep track of metrics like the defect discovery rate and time-to-fix.
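The DDE formula can be sketched in a few lines (the defect counts below are made up for illustration):

```python
# Sketch: Defect Detection Effectiveness (DDE) as defined above.

def dde(bugs_in_testing: int, bugs_after_release: int) -> float:
    """Percentage of all known defects that were caught before release."""
    total = bugs_in_testing + bugs_after_release
    return (bugs_in_testing / total * 100) if total else 0.0

# 95 of 100 total defects were caught before release
print(dde(95, 5))  # 95.0
```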

Defect Discovery

The Mean Time To Detect defects (MTTD) and Mean Time To Repair (MTTR) help QA managers plan future projects and measure progress.

MTTD = Total time to detect all defects / Number of defects detected

Mean Time To Repair (MTTR) measures the average time to resolve a defect once it has been identified. It reflects the efficiency of the defect resolution process.

MTTR = Total repair time / Number of repairs
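Both averages are total time divided by the number of defects; a minimal sketch, assuming per-defect detection and repair hours pulled from a hypothetical bug tracker:

```python
# Sketch: MTTD and MTTR from per-defect timings (illustrative data).
# "detected_after_hours" = time from introduction to detection,
# "repair_hours" = time from detection to fix.

defects = [
    {"detected_after_hours": 4.0, "repair_hours": 2.0},
    {"detected_after_hours": 10.0, "repair_hours": 6.0},
    {"detected_after_hours": 1.0, "repair_hours": 1.0},
]

mttd = sum(d["detected_after_hours"] for d in defects) / len(defects)  # 15 / 3 = 5.0
mttr = sum(d["repair_hours"] for d in defects) / len(defects)          # 9 / 3 = 3.0

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```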

Team Metrics

QA managers can use team metrics to check whether any team member needs help with a testing process or project knowledge. However, these metrics are for information only, not for pointing fingers.

QA Team metrics could include:

  • Defects reported, accepted, rejected per team member

  • Distribution of open defects for retest per test team member

  • Test cases allocated, per test team member

  • Test cases executed, per test team member

Defect Distribution

Defects can be categorized based on:

  • Defect root cause like coding errors, design flaws, or system configuration issues.

  • By feature/module

  • Severity, Priority

  • By type - functionality issues, usability issues, or performance issues

  • By tester: whether the issue was discovered by development testers, QA testers, UAT testers, or end users

  • Test type / testing activity that identified the issue, like code review, walkthrough, test execution, exploratory testing, and others

  • By platform/environment in which the defect appears

Test Coverage

Test Coverage is a critical indicator of quality, reflecting the thoroughness of a test plan. It measures the extent to which the software application has been tested. Examples of test coverage metrics include:

  • Test cases per requirement

  • Defects per requirement

  • Defects per set of requirements

Let's look at the test coverage metrics in detail:

Total Execution Percentage = (Number of tests executed / Total number of tests to be executed) × 100

This gives an idea of the tests executed so far compared to the test runs still outstanding.

  1. Requirements Coverage

Requirements Coverage = (Number of requirements covered / Total number of requirements) × 100

Requirements coverage per unit = number of passed test cases for each requirement
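A sketch of computing requirements coverage from a hypothetical mapping of requirement IDs to the results of the tests covering them (the IDs and results are made up):

```python
# Sketch: requirements coverage from requirement -> test results mapping.

requirements = {
    "REQ-1": ["pass", "pass"],
    "REQ-2": ["pass", "fail"],
    "REQ-3": [],            # no tests cover this requirement yet
    "REQ-4": ["pass"],
}

covered = [r for r, results in requirements.items() if results]
coverage_pct = len(covered) / len(requirements) * 100  # 3 of 4 -> 75.0

# Per-requirement view: a requirement passes only if every covering test passed
passed_per_req = {r: bool(res) and all(t == "pass" for t in res)
                  for r, res in requirements.items()}

print(f"Requirements coverage: {coverage_pct:.0f}%")
print(passed_per_req)  # REQ-2 fails (one failing test), REQ-3 is uncovered
```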

  2. Functional Coverage

Functional coverage checks if the key functions of the application were tested by the test suite. It ensures that each function was invoked by the running tests.

Functional Coverage = (Number of Functional Requirements invoked by Test Plan / Total Functional Requirements) * 100

  3. Product Coverage

This metric shows how many different products the app has been tested on.

For example, a web app being tested across a range of devices and operating systems to make sure it works well everywhere.

  4. Risk Coverage

Risk coverage identifies potential risks in a software application that might impact the user experience, and ensures tests are crafted so the software remains functional and effective in such scenarios.

One scenario would be when a third party API that handles important transactions becomes unresponsive.

Cost

It is a measure of the actual cost of testing compared to allocated cost. This would include people, infrastructure, and technologies.

Cost of testing = (Total defect resolution time × Hourly cost of a dev) + (Total retest time × Hourly cost of a tester)

Multiple factors contribute to the cost - in terms of automation, it could be script development time, maintenance and analysis.
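The cost formula above can be sketched as follows; the hourly rates and hour totals are placeholder assumptions, not industry figures:

```python
# Sketch: cost of testing per the formula above (all figures assumed).

DEV_RATE = 80.0      # hourly cost of a developer (assumption)
TESTER_RATE = 50.0   # hourly cost of a tester (assumption)

defect_resolution_hours = 40.0   # total dev time spent fixing defects
retest_hours = 25.0              # total tester time re-running failed tests

cost_of_testing = (defect_resolution_hours * DEV_RATE) + (retest_hours * TESTER_RATE)
print(cost_of_testing)  # 40*80 + 25*50 = 4450.0
```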

In addition to the metrics we discussed, Defect Density and Defect Age are also useful indicators for QA managers of their team's testing efforts.

Defect Density

Defect density = Number of defects found / Software size (in lines of code, function points, or modules) over a given period.

If a software product has 10 defects and 1,000 lines of code, the defect density is 0.01 defects per line of code, that is 1 defect per 100 lines of code.
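In code, the worked example above is a one-line division:

```python
# Sketch: defect density, matching the example of 10 defects in 1,000 LOC.

def defect_density(defects: int, loc: int) -> float:
    """Defects per line of code."""
    return defects / loc

density = defect_density(10, 1000)
print(density)        # 0.01 defects per line of code
print(density * 100)  # 1.0 defect per 100 lines
```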

Defect Age

Defect Age is calculated as the difference between the time a defect is fixed and the time it was discovered. A lower defect age indicates that bugs are being resolved more quickly within each test cycle.
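A sketch of computing defect age from discovered/fixed dates, using made-up timestamps as one might export from a bug tracker:

```python
from datetime import datetime

# Sketch: defect age = fixed date minus discovered date (illustrative data).

defects = [
    ("2024-03-01", "2024-03-03"),  # (discovered, fixed)
    ("2024-03-02", "2024-03-06"),
    ("2024-03-05", "2024-03-07"),
]

def age_days(discovered: str, fixed: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(fixed, fmt) - datetime.strptime(discovered, fmt)).days

ages = [age_days(d, f) for d, f in defects]
print(ages)                   # [2, 4, 2]
print(sum(ages) / len(ages))  # average defect age in days
```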

We’ve covered some QA metrics that help track how well the software tests are performing. With these insights, managers will be better equipped to ensure that the testing process and efforts are directed towards the right goals and that the software is of good quality.

***