Over the last three decades, the evolution of technology has pushed the boundaries of traditional industries and sparked new ones. Cybersecurity, whether viewed as a subdiscipline of information security, a set of practices within the information technology industry, or a significant industry in its own right, has emerged and continues to evolve to help manage the risk that technology presents to all aspects of our daily lives. Measuring that progress through cybersecurity metrics has become a critical component.
During this evolution, there have been many changes and new methods for evaluating the progress and effectiveness of efforts to implement and maintain cybersecurity practices. Unfortunately, the challenges of measuring these efforts persist, and success remains elusive for many. Cybersecurity, information security, and technology risk metrics remain contentious topics with pockets of interesting outcomes, but organizations often struggle to establish genuinely useful cybersecurity metrics.
In some cases, I think metrics dogma has caused us to lose track of what we are setting out to do. Where metrics should be a tool to help us communicate, inform our strategies, and make decisions, our attempts to define, create, and utilize them often create more confusion. In this article, I will explore some of the common pitfalls of cybersecurity metrics.
The Language of Cybersecurity Metrics: What We Have Here Is a Failure to Communicate
When used properly, security metrics communicate the status and trajectory of our security practices to stakeholders: where we are and what is needed to achieve cybersecurity maturity and impact targets. Senior stakeholders especially require relevant cybersecurity KPIs to assess the security program. However, we need to speak the language of our target audience, or breakdowns occur.
The language of cybersecurity metrics may need translation to reach your audience. Never assume the audience knows the terms that you are using without first providing a definition or explanation. Non-technical audiences are unlikely to understand technical jargon or acronyms familiar to cybersecurity practitioners. While acronyms save character space in documents, they can cause confusion. At a minimum, write them out the first time they appear and define their full meaning to ensure that your audience is on the same page. Better yet, don't use them at all. Remember that attention spans are short, and audiences may only scan the metrics, so consider writing them out each time.
Avoiding Common Cybersecurity Metrics Pitfalls: The Good, the Bad, and the Incoherent
Reliable security performance indicators can help inform decisions for cybersecurity investment or security program resource allocation for driving strategic actions. When tracked over time, security metrics can surface trends and provide information about the effectiveness of previous decisions, proving far more useful than snapshots or one-time counts of discrete activities, which can lead to misguided decisions. Furthermore, benchmarking metrics is useful as the security community builds out more effective best practices.
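As a minimal sketch of the snapshot-versus-trend distinction (all names and numbers here are invented for illustration), compare a single point-in-time figure with the same metric tracked across quarters:

```python
# Hypothetical illustration: a one-time snapshot vs. the same metric
# tracked over time. The metric and values are invented for this sketch.

# Mean time to remediate critical findings (in days), by quarter.
mttr_days = [("Q1", 41), ("Q2", 35), ("Q3", 28), ("Q4", 22)]

# A snapshot gives one number with no context: is 22 days good or bad?
snapshot = mttr_days[-1][1]

# Tracking over time surfaces a trend: quarter-over-quarter change.
deltas = [b - a for (_, a), (_, b) in zip(mttr_days, mttr_days[1:])]
avg_change = sum(deltas) / len(deltas)  # negative means remediation is speeding up

print(f"Latest snapshot: {snapshot} days")
print(f"Average change per quarter: {avg_change:+.1f} days")
```

The snapshot alone invites a misguided reaction, while the trend shows whether prior decisions are actually moving the number in the right direction.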
Over the years, I’ve heard a lot about what makes metrics good or bad. Here are a few descriptors I hear often, along with their potential pitfalls.
“Good security metrics are easy to produce.”
I disagree with this statement because it can lead us to produce more bad metrics than good ones. Easily produced security metrics are often spewed out of our tools as counts or raw numbers and don’t really help us communicate. Being cheap and easy to gather is not what makes a metric good.
On the contrary, I have found that metrics providing a more complete picture can be elusive and more difficult to derive. Valuable metrics may require real effort and energy to produce accurately and frequently. However, if they help determine how our security programs are achieving objectives, the cost and effort to operationalize producing them is a necessary investment. Producing meaningful metrics must be factored into the cost of operations, although ideally, over time, effort can be devoted to making them easier to compile reliably.
“Counting things is not a metric.”
Fact check: true. The number of things counted is not a metric, but unfortunately, such raw data points are what many security tools can easily produce. In isolation, the number of events, alerts, or vulnerabilities, for example, won’t help us understand whether we are achieving cybersecurity KPIs. Gathered over time, such counts may start to paint a picture of a trend, but a discrete number is only part of a statistic: a numerator lacking a denominator. While necessary in service of creating useful metrics, counting things is the easy part, and an easy place to stop if we haven’t clearly framed the question we are trying to answer, the decision we are seeking to make, or the conversations we want to inform with our metrics.
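The numerator-without-a-denominator point can be made concrete with a small sketch (the scanner counts and month labels below are invented for illustration): the raw counts mean little on their own, but pairing them produces a rate that supports a conversation.

```python
# Hypothetical illustration: bare counts become a meaningful metric
# only when paired with a denominator. All values are invented.

# Discrete monthly counts, as a (hypothetical) scanner might report them.
discovered = {"2024-01": 120, "2024-02": 95, "2024-03": 140}
remediated = {"2024-01": 84, "2024-02": 76, "2024-03": 126}

def remediation_rate(month: str) -> float:
    """Remediated / discovered: a ratio, not a bare count."""
    return remediated[month] / discovered[month]

for month in discovered:
    # "126 remediated" says little; "90% remediated" supports a decision.
    print(f"{month}: {remediation_rate(month):.0%} remediated")
```

The absolute counts even move in opposite directions month to month, yet the rate shows steady improvement, which is the kind of story a decision-maker can act on.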
“Metrics should be actionable.”
I often see statements like: “If metrics fail to drive decisions, then they are useless” and “Metrics should result in a behavior change.” While I don’t disagree with the general premise of these statements, I believe they are aspirations for your security program metrics. Starting from the goal of generating metrics to drive behaviors and force decisions risks focusing on metrics that produce the outcomes you desire rather than those the business goals dictate. It’s important to guard against bias when defining the metrics you use.
Work with your business counterparts and organization leadership to align security program goals with overall business goals, and then focus on metrics that help measure security program performance toward these goals. Decision support is a fantastic objective for the metrics you define and utilize. Cybersecurity metrics should help measure performance and identify trends and patterns, but most importantly, they should help the organization determine where to apply precious resources as it manages the technology risks associated with business objectives.
“Metrics should be isolated to things that you can control.”
The notion that metrics must be actionable limits them to things we believe to be in our control, leading to actions that we can direct or take ourselves. However, if you’ve worked in cybersecurity for any amount of time, you realize there’s much we can’t control. Measuring things that fall outside our span of responsibility or control can complicate assembling and producing metrics, but what that difficulty really reveals is a set of interdependent activities that are hard to assign to any one owner, requiring cross-functional teamwork and coordination. The information we need to form our metrics may need to come from other parts of the organization, external entities, suppliers, or partners. While we may be able to influence these entities, we may not be able to control or direct them. The data our metrics provide is very useful in supporting our case when we seek to influence change; still, we should measure security program execution as broadly as our cybersecurity risks extend.
Read My Blog Post on Creating Meaningful Cybersecurity Metrics
Effective cybersecurity metrics and KPIs play vital roles in communicating security program impacts to stakeholders, measuring program maturity, and helping guide program decisions. Therefore, it makes sense to ensure your metrics provide the insights you need—and avoid common pitfalls and misplaced assumptions. But how do you go about creating meaningful security metrics? Find out in this blog post: 7 Tips for Creating Meaningful Cybersecurity Metrics.
Meanwhile, if you’re looking for support in building and measuring your security program, consider learning more about our robust Security Breach Protection program and complimentary Cybersecurity Workshop.