The NCSC's Dave Chismon published a blog post last month with an unusually direct argument: most of the metrics that security operations centres report on are not just useless, they actively damage detection. No metrics, he says, are better than bad metrics.
## The metrics that go wrong
Four perennials get called out:
- Tickets processed. Rewards volume. Encourages analysts to close alerts as false positives without proper investigation, because closing them earns the metric.
- Time to close tickets. Compounds the first problem. Speed beats accuracy. Genuine alerts get triaged out because the clock is the goal.
- Number of detection rules written. Optimises for rule count, not rule quality. The result is alert inflation: more noise, more burnout, no better detection.
- Volume of logs ingested. Optimises for storage, not signal. You end up paying to keep data that doesn't help you find anything.
Each one looks reasonable on a dashboard. Each one quietly pulls the team away from the thing the team is meant to do.
## The one that counts
Chismon's bottom line: "There is only one metric that shows a SOC's efficacy: does it detect (and respond to) attacks in a timely manner?"
That's it. Time to detect, time to respond, validated by exercises that actually try to evade detection (red team work, purple team work, threat-led testing). Everything else is a supporting indicator, useful internally, dangerous as a board-level KPI because the moment a metric becomes a target, people game it.
Without the right measure, he warns, "a SOC is ineffective and the job is miserable, with analysts describing themselves as 'ticket monkeys.'"
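To make that single measure concrete, here is a minimal sketch of computing mean time to detect and mean time to respond from timestamped incident records. The records and field names are hypothetical; in practice the "start" time would come from evidence such as a red-team exercise log or post-incident forensics, not from the alert itself.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records (field names are illustrative):
# "start"     - when the attack actually began (established afterwards)
# "detected"  - when the SOC first raised it
# "contained" - when response/containment began
incidents = [
    {"start": datetime(2024, 3, 1, 9, 0),
     "detected": datetime(2024, 3, 1, 9, 45),
     "contained": datetime(2024, 3, 1, 11, 0)},
    {"start": datetime(2024, 3, 8, 14, 0),
     "detected": datetime(2024, 3, 8, 14, 20),
     "contained": datetime(2024, 3, 8, 15, 5)},
]

def mean_minutes(deltas):
    """Average a sequence of timedeltas, in minutes."""
    return mean(d.total_seconds() / 60 for d in deltas)

# Mean time to detect: attack start -> first detection.
mttd = mean_minutes(i["detected"] - i["start"] for i in incidents)
# Mean time to respond: first detection -> containment.
mttr = mean_minutes(i["contained"] - i["detected"] for i in incidents)

print(f"mean time to detect:  {mttd:.1f} min")
print(f"mean time to respond: {mttr:.1f} min")
```

The point of the sketch is what it *doesn't* count: no ticket volumes, no rule counts, no log gigabytes. Both numbers are only meaningful if the underlying detections are validated by exercises that genuinely try to evade them.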
## Why this matters outside SOCs
Most UK SMBs don't run a SOC. They have an IT partner, an MSP, or a managed detection service that produces a monthly report. The same trap applies. If your security report is a count of tickets, alerts, and patches, it's measuring activity, not protection. The useful question to ask your provider is the one Chismon poses: when something real happens, will you see it, and how fast?
If they can answer that with evidence (test exercises, simulated phishing run-throughs, red team results, time-stamped detections during a real incident), good. If they answer with ticket counts, you've found the problem.
## How Steelwise can help
What your security provider actually reports, and whether those numbers tell you anything useful about how protected you are, is exactly the kind of question we work through with clients. Get in touch.
## Further reading
- NCSC: Could your choice of metrics be harming your SOC?
- Infosecurity Magazine: No metrics are better than bad metrics in the SOC, says NCSC