Making Cybersecurity Legible: Metrics That Actually Communicate Value

Jon Pertwee · 2024

 

One of the most persistent challenges in cybersecurity is communicating its value to the people who control budgets. If the security function is working well, nothing visible happens. No breaches, no outages, no incidents. And in that silence, it becomes easy for senior management to wonder whether the investment is necessary at all. 

The result is a familiar pattern: budgets erode during quiet periods, security posture weakens, something goes wrong, and budgets are restored. This is governance by incident rather than by design, the same structural failure that appears in IT governance more broadly.

The answer is not to wait for incidents to make the case. It is to find metrics that translate the value of cybersecurity into language that senior management can understand and engage with, ideally in financial terms. The following are metrics and arguments I have found useful in practice.

 

Security Measures Can Improve Productivity, Not Just Reduce Risk

The standard complaint about security from end users is that it slows them down. Additional authentication steps, password complexity requirements, restrictions on what can be accessed from where. From a user’s perspective, security is friction.

This framing is often accurate but not inevitable. Security tools implemented thoughtfully can reduce friction rather than add it, and when they do, that productivity gain is measurable and presentable to a board.

Consider hardware security keys compliant with FIDO Alliance standards. A user logging into three applications per day, saving thirty seconds per login compared to a traditional password process, saves ninety seconds daily. Across twenty working days a month and twelve months, that is 360 minutes per year, or six hours. At a notional cost of $50 per hour, that is $300 of recovered productivity per employee per year.

A pair of FIDO-compliant security keys (one primary, one backup against loss) costs approximately $100. Deployed across 500 employees, the total hardware cost is $50,000. The annual productivity recovery across those 500 employees is $150,000. The investment pays for itself in the first year, and in subsequent years generates a net positive return, while simultaneously reducing the risk of phishing-based credential compromise.
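The arithmetic above can be laid out as a small script, which also makes it easy to rerun with an organisation's own figures. The per-login saving, hourly rate, headcount, and key cost below are the notional values from the example, not measured data.

```python
# Worked version of the security-key ROI calculation. All inputs are the
# article's notional figures -- substitute your own before presenting.

SECONDS_SAVED_PER_LOGIN = 30
LOGINS_PER_DAY = 3
WORKING_DAYS_PER_YEAR = 20 * 12      # 20 days/month, 12 months
HOURLY_RATE_USD = 50
EMPLOYEES = 500
KEY_PAIR_COST_USD = 100              # one primary key plus one backup

hours_saved_per_employee = (
    SECONDS_SAVED_PER_LOGIN * LOGINS_PER_DAY * WORKING_DAYS_PER_YEAR / 3600
)
annual_recovery = hours_saved_per_employee * HOURLY_RATE_USD * EMPLOYEES
hardware_cost = KEY_PAIR_COST_USD * EMPLOYEES
first_year_net = annual_recovery - hardware_cost

print(f"Hours saved per employee per year: {hours_saved_per_employee:.1f}")
print(f"Annual productivity recovery:      ${annual_recovery:,.0f}")
print(f"One-off hardware cost:             ${hardware_cost:,.0f}")
print(f"First-year net return:             ${first_year_net:,.0f}")
```

The first-year net of $100,000 grows in subsequent years, since the hardware cost is one-off while the productivity recovery recurs.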

This kind of calculation is more persuasive to a board than any risk narrative, because it puts cybersecurity in the same analytical frame that every other business investment occupies. The question is not ‘how much does security cost?’ but ‘what is the return on this security investment?’ Those are very different conversations.

 

Employees Are the First Line of Defence, Not the Last

Security awareness training is routinely described as the ‘last line of defence’ against human-targeted attacks. This framing should be challenged. Employees are not a fall-back control activated when everything else has failed. They are the first point of contact with the majority of attack vectors that actually succeed: phishing emails, social engineering, inadvertent data disclosure, weak credential practices.

A culture of security awareness directly reduces the frequency and severity of these incidents. That reduction is measurable. Organisations that track phishing simulation click rates, credential reuse incidents, security training completion and assessment scores, and incident reports originating from employee alerts can demonstrate improvement over time and connect that improvement to a reduction in risk exposure.

Presented to a board as a trajectory rather than a point-in-time snapshot, security culture metrics make visible something that is otherwise invisible: the gradual shift from a workforce that represents a significant attack surface to one that actively contributes to the organisation’s security posture. The cost of training programmes, when compared against the average cost of a breach that employee behaviour either prevents or enables, is straightforward to justify.
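As a minimal sketch of the trajectory framing, the snippet below tracks one of the metrics mentioned above, phishing simulation click rates, quarter over quarter. The figures are invented for illustration.

```python
# Sketch: security-culture metrics as a trajectory, not a snapshot.
# Quarterly click rates are illustrative, not real data.

phishing_click_rate = {"Q1": 0.24, "Q2": 0.18, "Q3": 0.11, "Q4": 0.07}

baseline = phishing_click_rate["Q1"]
for quarter, rate in phishing_click_rate.items():
    change = 100 * (rate - baseline) / baseline
    print(f"{quarter}: {rate:.0%} click rate ({change:+.0f}% vs Q1)")
```

The same shape works for training completion, credential reuse incidents, or employee-originated alerts; the point is the direction of the line, not any single value.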

 

Reducing the Time Between Breach and Containment

According to the IBM Cost of a Data Breach Report 2023, the average time between a breach occurring and its detection was 204 days. From detection to containment, a further 73 days. That is approximately nine months during which a threat actor may have ongoing access to systems and data.

The same report notes that 62% of the total cost of a breach is incurred in the detection and escalation phases. Security tools that reduce dwell time (the period between initial compromise and containment) therefore have a direct and quantifiable impact on breach costs.

Security Information and Event Management (SIEM) systems that integrate with other controls, automating detection and escalation rather than relying on manual review, compress this timeline significantly. The metrics here are mean time to detect (MTTD) and mean time to respond (MTTR), tracked over time and presented alongside the financial implications of the current baseline versus an improved target state.
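A minimal sketch of how MTTD and MTTR can be computed from incident records is below. The field names and dates are illustrative, not drawn from any particular SIEM's API; in practice these timestamps come from your incident management system.

```python
# Sketch: deriving MTTD and MTTR from incident timestamps. Each record
# carries compromise, detection, and containment times (illustrative data).
from datetime import datetime
from statistics import mean

incidents = [
    {"compromised": datetime(2023, 1, 10), "detected": datetime(2023, 7, 1),
     "contained": datetime(2023, 9, 5)},
    {"compromised": datetime(2023, 3, 2), "detected": datetime(2023, 9, 20),
     "contained": datetime(2023, 11, 28)},
]

# MTTD: average days from compromise to detection.
mttd_days = mean((i["detected"] - i["compromised"]).days for i in incidents)
# MTTR: average days from detection to containment.
mttr_days = mean((i["contained"] - i["detected"]).days for i in incidents)

print(f"MTTD: {mttd_days:.0f} days, MTTR: {mttr_days:.0f} days")
```

Tracked quarterly and multiplied by a per-day dwell cost, the downward trend in these two numbers is the financial story for the board.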

The IBM report is worth citing directly in board presentations. It provides independently verified cost data that removes the need to argue abstractly about risk. See: IBM Cost of a Data Breach Report 2023, p.14.

 

Compliance Costs Less Than Non-Compliance

Regulatory fines for data protection failures are large enough that the cost of compliance is almost always lower than the cost of a single significant breach. (This is deliberate.) The EU GDPR sets maximum fines at €20 million or 4% of annual global turnover, whichever is greater. PCI DSS penalties can include fines of up to $100,000 per month, fees per replacement card, and investigation and audit costs.

For a retailer processing a million credit card transactions per year, the replacement card fee alone in a major breach could reach tens of millions of dollars. Mapped against the cost of achieving and maintaining PCI DSS compliance, the investment case is not difficult to make.
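A rough sizing of that exposure is below. The blended per-card figure is an assumption for illustration (actual fees vary by card brand and acquirer agreement); the point is the order of magnitude relative to compliance costs.

```python
# Back-of-envelope replacement-card exposure for the retailer example.
# The per-card figure is an assumed blend of reissue fees, per-card
# fines, and monitoring costs -- illustrative only.

cards_affected = 1_000_000          # card volume from the example
blended_cost_per_card_usd = 25      # assumption, varies by agreement

exposure = cards_affected * blended_cost_per_card_usd
print(f"Replacement-card exposure alone: ${exposure:,.0f}")
```

Even at a much lower per-card figure, the exposure dwarfs the typical annual cost of maintaining PCI DSS compliance.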

Beyond mandatory compliance, voluntary alignment with standards such as ISO 27001 carries commercial value. Certification is increasingly a prerequisite for enterprise procurement, particularly in sectors where clients conduct supplier due diligence on security posture. Presenting compliance as a revenue-enabling investment rather than a cost centre changes the framing for a board audience.

A useful metric here is compliance progress tracked as a journey: the number of audit findings closed, risk register items addressed, and controls implemented, presented as a percentage of completion and updated quarterly. This gives senior management visibility into trajectory rather than just current state, and connects security investment to specific risk reduction outcomes.
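The quarterly KPI described above can be sketched as follows; the finding counts are invented for illustration, and note that the denominator can legitimately grow as audits surface new items.

```python
# Sketch: compliance progress as a quarterly percentage-complete KPI.
# Figures are illustrative.

quarters = {
    "Q1": {"closed": 12, "total": 80},
    "Q2": {"closed": 31, "total": 80},
    "Q3": {"closed": 52, "total": 84},  # scope grew: audit found new items
}

for quarter, items in quarters.items():
    pct = 100 * items["closed"] / items["total"]
    print(f"{quarter}: {pct:.0f}% of findings closed")
```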

 

Dark Web Monitoring as an Early Warning System

Credential monitoring on dark web databases provides an early warning of potential compromise that would otherwise go undetected for months. When a user’s credentials appear in a dark web database, it indicates one of two things: either the user has reused credentials on a third-party site that has been breached, or the organisation itself has been compromised.

A single account appearing in a credential database warrants investigation of that account’s activity within the network. Multiple accounts appearing simultaneously may indicate an active breach requiring immediate response. In both cases, detection through dark web monitoring compresses the timeline significantly relative to the 204-day average detection window.
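The triage rule in the paragraph above is simple enough to encode directly. The thresholds and response labels below are illustrative policy choices, not a standard.

```python
# Sketch of the dark-web-hit triage rule: one account suggests
# third-party credential reuse; multiple simultaneous hits may
# indicate an active breach. Labels are illustrative.

def triage(accounts_found: int) -> str:
    if accounts_found == 0:
        return "no action"
    if accounts_found == 1:
        return "investigate account activity"   # likely third-party reuse
    return "escalate: possible active breach"   # simultaneous exposures

print(triage(1))
print(triage(5))
```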

Threat actors who obtain credentials need to monetise their access quickly before it is revoked. An organisation that detects compromised credentials within days rather than months is substantially better positioned to contain the damage. The metric is straightforward: mean time to detection for credential compromise events, tracked against the industry baseline.

 

Quantifying the Cost of Downtime: RTO, RPO, and RTA

Availability is one of the three pillars of the CIA triad (Confidentiality, Integrity, Availability), and it is the one most directly connected to financial metrics that boards understand. System downtime has a cost that can be calculated with reasonable precision, and that calculation is one of the most effective tools for securing investment in disaster recovery capabilities.

Three metrics matter here:

The Recovery Time Objective (RTO) defines how long the organisation can tolerate being offline before the business impact becomes unacceptable.

The Recovery Point Objective (RPO) defines how much data loss is acceptable, measured in time from the last backup to the point of failure. Together, these define the parameters within which the DR function is expected to operate.

The third metric, Recovery Time Actual (RTA), is often underused but highly valuable. RTA measures the actual elapsed time from the onset of a disruption to full recovery, including the time taken to locate personnel, assemble the response team, conduct initial assessments, and begin technical recovery activities. A DR plan that specifies a four-hour RTO but consistently delivers a twelve-hour RTA in exercises has a gap that the RTA metric makes visible.
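Making that gap visible is mechanical once DR exercise timelines are recorded. A minimal sketch, with illustrative timestamps and phase names:

```python
# Sketch: computing RTA from a DR exercise log and comparing it to the
# RTO the plan promises. Timestamps are illustrative.
from datetime import datetime, timedelta

RTO = timedelta(hours=4)   # what the DR plan specifies

# Phases of one exercise, from onset of the simulated disruption:
exercise = [
    ("disruption onset",    datetime(2024, 3, 9, 2, 0)),
    ("team assembled",      datetime(2024, 3, 9, 4, 30)),
    ("assessment complete", datetime(2024, 3, 9, 6, 0)),
    ("service restored",    datetime(2024, 3, 9, 14, 0)),
]

rta = exercise[-1][1] - exercise[0][1]   # Recovery Time Actual
gap = rta - RTO

print(f"RTA: {rta}, RTO: {RTO}, gap: {gap}")
```

Recording the intermediate phases, not just the endpoints, also shows where the time goes: here, assembling the team consumed more than half of the promised RTO before any technical recovery began.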

Mean Time Between Failures (MTBF) rounds out the picture for board reporting. Improvement in MTBF over time demonstrates that security and resilience investments are reducing the frequency of failures, not just improving recovery from them. Presented alongside the per-hour cost of downtime, MTBF improvement has a calculable financial value that connects directly to the business case for continued investment.
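The connection from MTBF improvement to a financial figure can be sketched as below. The per-hour downtime cost, failure counts, and recovery duration are assumed values; in practice the cost figure should be derived from the organisation's own revenue and operations data.

```python
# Sketch: translating an MTBF improvement into avoided downtime cost.
# All inputs are assumed values for illustration.

HOURS_PER_YEAR = 8760
DOWNTIME_COST_PER_HOUR_USD = 10_000   # assumed; derive from revenue data

def annual_downtime_cost(failures_per_year, hours_down_per_failure):
    return failures_per_year * hours_down_per_failure * DOWNTIME_COST_PER_HOUR_USD

before = annual_downtime_cost(failures_per_year=6, hours_down_per_failure=4)
after = annual_downtime_cost(failures_per_year=2, hours_down_per_failure=4)

print(f"MTBF before: {HOURS_PER_YEAR / 6:,.0f} h, after: {HOURS_PER_YEAR / 2:,.0f} h")
print(f"Annual downtime cost avoided: ${before - after:,.0f}")
```

Here the resilience investment triples MTBF and, at the assumed cost per hour, avoids $160,000 of downtime cost per year, which is the number that belongs on the dashboard.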

 

Shadow IT: The Hidden Cost of Ungoverned Technology

Asset management (understanding precisely what devices, applications, and services the organisation uses, and who is responsible for them) is a foundational element of both cybersecurity and cost management. It is also the lens through which shadow IT becomes visible.

Shadow IT refers to technology resources used within the organisation that sit outside IT department oversight: personal cloud storage used for business documents, unauthorised SaaS applications procured by individual departments, consumer applications handling data that should be in governed systems. Research, including work conducted as part of my own MSc studies, indicates that the majority of cloud applications in use within organisations fall into this category.

The business risks are significant. Intellectual property placed in ungoverned cloud applications sits outside the organisation’s data loss prevention controls. Credentials used in shadow IT applications are rarely subject to the same standards as governed systems. Compliance obligations relating to data residency, retention, and access control may be unknowingly violated.

The financial case for addressing shadow IT is straightforward. Shadow IT duplicates spending on solutions the organisation already pays for, introduces security costs through the compensatory controls needed to manage the associated risk, and creates compliance exposure. A KPI tracking the number of ungoverned IT solutions identified and brought under governance over time is both a security metric and a cost management metric, which makes it useful for exactly the kind of dual-purpose board reporting that cybersecurity benefits from.

 

Presenting Metrics: The Case for Visual Reporting

The metrics described above are only useful if they are communicated effectively. A CISO or IT security manager who presents a detailed technical report to a board will lose their audience before they make their point. The goal is to translate complex security data into something a senior leader can interpret in thirty seconds.

A dashboard that tracks a small number of key metrics over time, expressed wherever possible in financial terms, achieves this. Useful candidates include: mean time to detect and respond, employee security training completion and phishing simulation results, system uptime and RTA against RTO targets, compliance progress as a percentage, and the ratio of security investment to the quantified cost of incidents prevented or contained.

A rising line showing security posture improving over time, tied to a financial narrative about what that improvement is worth, is the most effective tool available for securing continued investment in the security function. The argument is not ‘trust us, threats are increasing.’ It is: ‘here is what we did last quarter, here is what it cost, and here is what it was worth.’

 

A Closing Note

The common thread across all of these metrics is translation. The security function understands risk in technical terms; the board understands risk in financial and strategic terms. The job of a cybersecurity metric is to bridge that gap without oversimplifying what it represents.

None of these metrics is complete on its own, and the right selection depends on the organisation, its risk profile, and what its senior leadership cares most about. If you are working on how to communicate your security programme’s value more effectively, or building a reporting framework for board-level consumption, feel free to contact me.