by Roger G. Johnston, Ph.D., CPP
Right Brain Sekurity
The traditional measure of security effectiveness (has anything really bad happened recently?) is not a very effective metric. It fails in at least three ways: it ignores changes in assets, resources, technologies, threats, and vulnerabilities; it does not encourage a proactive approach to security; and it fails to prepare us for rare catastrophic incidents. This article discusses some unconventional metrics for security that might be worth considering, especially for complex security programs.
Any good security metric should have certain key attributes. It should measure actual security, not just security management. It should measure the important things, not just the things that are easy to measure. It should emphasize quality, not just quantity. It should recognize that compliance and security are not the same thing. And all metrics should avoid driving undesirable employee behaviors and attitudes.
Some metrics worth considering:
- Degree of transparency. Somewhat counter-intuitively, security is usually better when it is transparent because this allows for review, criticism, questions, improvements, and buy-in. “Security by Obscurity” is not a viable security strategy. People and organizations cannot keep long-term secrets, and you usually have to assume the adversaries (insiders or outsiders) understand your security.
- Amount of thoughtful pushback on auditors and high-level security rules to allow for local conditions. The key test for local security practice ought to be whether it is good security, not whether it follows the one-size-fits-all rules mandated by high-level bureaucrats with no understanding of the local environment. Pushback suggests there has been some local critical thinking about security.
- Frequency of sanity checks on security rules by talking with employees affected by them.
- Disgruntlement mitigation. Percentage of the time when managers and HR are aware of allegations of an unfair or hostile work environment (bully bosses, coercion, sexual or racial harassment, etc.) that they take positive actions, and do not retaliate against the alleged victims. While disgruntlement is only one of many motivators for inside attacks, it is one of the easiest to counter. A related metric is the number of times that the organization’s grievance and employee assistance programs are used. They will only be used frequently if employees view them as safe, effective, and legitimate. Perception is everything.
- Employee turnover rates for both security and non-security personnel. This is closely related to the insider threat.
- Frequency of formal and informal communication between security personnel (including low-level personnel) and non-security employees and contractors. Security by “walking around” is an effective strategy.
- Resiliency preparation. Prevention is difficult. A good security program needs to be ready in advance to lead recovery after a serious security incident, including tampering, hacking, and counterfeiting.
- Amount of “what if?” thinking. How often do employees and security personnel mentally or physically rehearse possible security incidents, and how often are novel incidents considered? Even wildly implausible scenarios get people thinking creatively about security!
- Frequency of formal and informal vulnerability assessments, number of ongoing vulnerabilities and potential countermeasures identified, and number of suggestions for security improvements, including from low-level personnel and non-security employees. (It is important to realize that vulnerability assessments are not the same thing as threat assessments, security surveys, performance testing, “Red Teaming”, pen(etration) testing, or compliance auditing—though these things are worth doing and can shed some light on vulnerabilities.)
- Number of security changes recently introduced. This leads us to the idea of “Marginal Analysis”. (In mathematics and economics, “marginal” means rate of change.)
Now securing even a medium-sized enterprise or facility is a very complex minimization problem. Risk needs to be minimized while considering hundreds of different security parameters (variables) involving security personnel, technologies, spatial and temporal deployment of resources, possible security strategies, assets to be protected, threats, vulnerabilities, training, etc. This is very much like a classic minimization problem in N-dimensional space, where N is quite large.
Figure 1 shows a schematic of risk plotted as a function of only 2 security parameters (two parameters plus a risk axis, so the plot is 3-dimensional). The risk surface has peaks and valleys. In theory, the goal is to find the values for the two security parameters that correspond to the lowest valley in the risk surface; this is the point of minimum risk.
The idea with Marginal Analysis is to introduce changes—real or theoretical—in your security program, and then determine if the risk is lowered as a result. If it is, try additional similar changes to see if you can get the risk even lower. If the risk instead increases, try changes in approximately the opposite direction. The goal is to travel down the red line shown in Figure 1 by adjusting parameters in an attempt to find the minimum in the risk surface, and more importantly, the values of the various parameters that give this minimum risk.
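This downhill procedure can be sketched in code. The sketch below is purely illustrative: the `risk` function, the parameter list, and the step size are all stand-ins, since in a real program risk would be judged by assessors rather than computed from a formula.

```python
import random

def marginal_descent(risk, params, step=0.1, iterations=200, seed=0):
    """Marginal analysis as greedy descent: nudge some parameters,
    keep the change only if it lowers risk; otherwise try roughly
    the opposite direction."""
    rng = random.Random(seed)
    best = list(params)
    best_risk = risk(best)
    for _ in range(iterations):
        # Vary a random mix of parameters, not just one at a time.
        delta = [rng.choice([-step, 0.0, step]) for _ in best]
        trial = [p + d for p, d in zip(best, delta)]
        r = risk(trial)
        if r >= best_risk:
            # Risk did not drop: try the opposite direction instead.
            trial = [p - d for p, d in zip(best, delta)]
            r = risk(trial)
        if r < best_risk:
            best, best_risk = trial, r
    return best, best_risk

# Hypothetical risk surface with a single valley at (2, -1); a real
# program would substitute expert judgment or assessment results.
toy_risk = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

params, final_risk = marginal_descent(toy_risk, [0.0, 0.0])
```

Starting from (0, 0), the search walks down the toy surface toward its valley, ending with a much lower risk than it started with.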
We can conclude that we have “pretty good security” if no changes significantly lower the risk. This may be more practical than an absolute determination of security effectiveness.
Somewhat counter-intuitively, the changes in your security should involve variations in more than just one parameter at a time. The changes will usually be minor. Every once in a while, however, it is important to consider large changes. This is because you may currently be in a local valley in the risk surface. There might be a lower valley over the next hill or mountain, and a large change could allow you to locate this lower risk. (It is, however, important not to let the best be the enemy of the good. Often a good security solution is acceptable; we need not demand the absolute best, i.e., the absolute lowest valley.)
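One way to picture the role of the occasional large change is a toy search that mixes small adjustments with rare big jumps. Again, everything here is hypothetical: the 1-dimensional risk curve, the step sizes, and the jump schedule are stand-ins for a real security program.

```python
import random

def descend_with_jumps(risk, start, small=0.1, big=6.0,
                       iterations=500, jump_every=10, seed=1):
    """Mostly small marginal changes, plus an occasional large jump
    that can carry the search over a hill into a lower valley.
    A change is kept only if it lowers risk."""
    rng = random.Random(seed)
    current, current_risk = list(start), risk(start)
    for i in range(1, iterations + 1):
        step = big if i % jump_every == 0 else small
        trial = [p + rng.uniform(-step, step) for p in current]
        r = risk(trial)
        if r < current_risk:
            current, current_risk = trial, r
    return current, current_risk

# Toy 1-D risk curve: a shallow local valley at x = 0 (risk 1.0)
# and a deeper, wider valley at x = 5 (risk 0.0).
def two_valley_risk(p):
    x = p[0]
    return min(x ** 2 + 1.0, 0.2 * (x - 5.0) ** 2)

best, best_risk = descend_with_jumps(two_valley_risk, [0.0])
```

Small steps alone would leave the search stuck at the floor of the shallow valley (risk 1.0); the rare big jumps give it a chance to land in the deeper valley and continue downhill from there.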
Now the risk surface in Figure 1 is constantly morphing and fluctuating over time with changes in threats, assets, personnel, technologies, etc. So this process of introducing changes to see if the risk gets lowered is ongoing and not a one-time activity. It is also important to recognize that the mathematical N-dimensional minimization problem is only an analogy. The process is not really mathematical. We do not yet understand enough about security to do this complete process mathematically in any realistic way. Perhaps someday we will.
So how do you know if a given set of changes improves or degrades your security, i.e., lowers or raises the risk (and takes you “downhill” in N-dimensional space)? There are several possible answers:
Possible Answer 1: It doesn’t matter. The main goal of Marginal Analysis is to encourage change, flexibility, and critical thinking about your security. Whether the change is actually implemented or merely contemplated, a subjective estimate of whether there is security improvement or not may be adequate. Often, it is easier to judge incremental improvement in security than absolute effectiveness.
Possible Answer 2: For a more nuanced approach, the changes can be implemented for real, and the security system then studied for evidence of improvement or degradation.
Possible Answer 3: Perhaps the best approach is to let vulnerability assessors, threat assessors, and risk analysts help you determine whether the change (implemented or contemplated) actually improves your security.
The ultimate question worth considering with Marginal Analysis is the following: Can continually focusing on changes help us be flexible, adaptable, and proactive about security, rather than being stuck with inertia, reactive approaches, wishful thinking, and groupthink? Give it a try!
About the Author
Roger G. Johnston, Ph.D., CPP is CEO and Chief Vulnerability Wrangler at Right Brain Sekurity, a company devoted to security consulting and vulnerability assessments. He previously was head of the Vulnerability Assessment Teams at Los Alamos and Argonne National Laboratories (1992-2007 and 2007-2015).