Every year organizations around the world spend more and more money on acquiring the latest security technologies and products. Yet very few organizations achieve a reassuring picture of their current risk exposure or of the benefits of that investment. Senior management wants to know whether the investments paid off, and information security managers want to justify the choices for which they are accountable, while at the same time getting the professional satisfaction of a job well done.

There is only one way to answer all those questions, and that is by developing a range of security metrics that help us measure security. Unfortunately, that is far easier said than done, and the development of security metrics is an area of extreme deficiency in many organizations.

The first step in “cracking” the security metrics nut is to define what a metric is. One would think that, given how important the issue is in today’s information security world, there would be plenty of definitions out there. Unfortunately, there is no single good definition, and in order to understand what a metric is one first has to digest quite a bit of reading. The first and focal publication on the subject is the book by Andrew Jaquith from 2007, titled “Security Metrics: Replacing Fear, Uncertainty, and Doubt”. As of today, it is still one of the best books on the subject. Paraphrasing the definition of metric from Wikipedia, we can define a metric as follows:

“A metric is a system of attributes or parameters and documented methods of assigning values to them (obtaining a measure) and interpreting them so that they can provide meaningful and usable information about an object of measurement.”

The best way to visualize a metric, based on the above definition, is to use a metric template, i.e. a sample empty metric. Examples are available from sources such as NIST and ISO. The following metric template helps visualize what a typical metric is and the complexity involved in developing one. The example is not exhaustive, and in real life other parameters need to be defined. In a real scenario, developing metrics is part of a wider Security Metrics Development Programme, which ensures that the metrics are aligned to the company’s strategic and information security goals and have clear roles and responsibilities.



Metric ID

A unique identifier of the metric.

Goal

Statement of the specific information security goal that the measurement will help assess. In other words, what information do we intend to get out of the measurement? How will such information help us understand whether we are getting closer to or further from our goals?

Object of Measurement

What are we measuring? (e.g. a security control or a process)

Data Source

Do we have the data we will use for the measurement? Is that data easily available, consistent and reliable? Think about trying to measure an incident management process without having a ticketing system to collect information about the overall process.

Type of Measure

We can measure different aspects. For instance, with regard to a process such as incident management, we can measure its level of implementation, its effectiveness/efficiency and, finally, its impact on the business. Obviously, we cannot measure a control’s effectiveness until it is actually implemented!

Measure

Statement of the actual measurement, such as “Percentage of critical incidents per department”.

Formula

The actual calculation to be performed that results in a numeric expression of the metric, for example (Num of Critical Incidents / Total Num of Incidents). It is important to keep the calculation simple in order to ensure it will be carried out regularly.

Target

Threshold for a satisfactory rating for the measure, such as milestone completion or a statistical measure.

Collection Frequency

Indication of how often the data is collected and analyzed.

Reporting Frequency

Indication of how often the data is reported.

Reporting Format

Indication of how the measure will be reported.

Reviewer

The person who will review the measurement to validate the results before they are communicated to the “Client” of the measurement.

Client of Measurement

Management or other interested parties that will be the recipients of the information generated from the measurement. It is important that their reaction is not the proverbial “so what?”
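To make the template above more concrete, it can be sketched as a simple data structure together with the sample formula. This is only an illustration: the class, its field names and the `critical_incident_ratio` helper are hypothetical and are not drawn from any standard template.

```python
from dataclasses import dataclass


@dataclass
class SecurityMetric:
    """One filled-in instance of the metric template; field names are illustrative."""
    metric_id: str
    goal: str
    object_of_measurement: str
    data_source: str
    measure: str
    formula: str               # human-readable description of the calculation
    target: float              # threshold for a satisfactory rating, e.g. 0.10 = 10%
    collection_frequency: str
    reporting_frequency: str
    client: str


def critical_incident_ratio(critical: int, total: int) -> float:
    """Sample formula from the template: Num of Critical Incidents / Total Num of Incidents."""
    if total == 0:
        return 0.0  # no incidents recorded yet, so there is nothing to report
    return critical / total


# Example: 3 critical incidents out of 40 reported in the period
ratio = critical_incident_ratio(3, 40)
print(f"{ratio:.1%}")  # 7.5%
```

Keeping the formula as a small, separate function mirrors the advice in the template: a calculation simple enough to be rerun at every collection interval.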

When trying to develop security metrics, many organizations start with metrics that sound good on paper but are more like a wish list. A typical example is an organization trying to measure the cost of security incidents. That is a great metric, but a quick look at our sample metric template will tell us that the organization is probably not in a position to produce it: the ticketing system may have been deployed only recently, there is no baseline data, the reliability of the data is weak because the process is not yet mature, and roles and responsibilities are not yet fully embraced.

Another bad practice is to forget about the “client of the measurement” and to develop metrics that have absolutely no meaning or value to those whom the measurement is supposedly meant for!

Developing security metrics needs to follow a maturity model, and organizations must realize that certain things cannot be measured at the beginning. As with everything else in life, there are no shortcuts. In time, data will become more reliable and more easily collectable through forms of automation. Ultimately, the measurements themselves will become easier and more meaningful.

Learning how to develop a security metrics programme is no easy task. For the metrics themselves, I would recommend the following books:

  • Andrew Jaquith (2007) – Security Metrics: Replacing Fear, Uncertainty, and Doubt
  • Lance Hayden (2010) – IT Security Metrics: A Practical Framework for Measuring Security & Protecting Data
  • Caroline Wong (2011) – Security Metrics, A Beginner's Guide
  • George Campbell (2014) – Measures and Metrics in Corporate Security

For a higher-level understanding of the metrics programme itself, a good start is the NIST publication SP800-55 Revision 1 (2008), “Performance Measurement Guide for Information Security”. Despite its bias towards the US environment, it is far clearer than the more detailed and perhaps slightly too academically presented ISO/IEC 27004. The latter, however, must be reckoned with for its Plan-Do-Check-Act approach.