Cognitive biases in CyberSecurity

First encounter: Cognitive dissonance

I faced an interesting bias early in my career. Back in the 90's, when firewalls were still a somewhat novel (if not suspicious) type of component to integrate into an infrastructure, I often heard that "hackers" would get hold of the list of companies acquiring such solutions and focus their attacks on them.

The point was that after deploying a firewall, you start to get logs, and thus visibility into what happens. It is not that "hackers" suddenly decided to attack a protected network (while thousands were left unprotected, by the way). The reality is that once the firewall was deployed, it became possible to observe facts we had somewhat suspected: we are not alone on the Internet.

And it can be scary. Especially when you have no understanding of the real risk and impact behind a red line on your management console stating that a packet targeting port 23 (remember, we are in the 90's) has been rejected.

Two options remained: one could either accept the reality of a digital world getting more and more dangerous, or hide behind an irrational belief. A blue pill / red pill dilemma, also known as Cognitive dissonance.

No information is information: the Survivorship bias

Context

During World War 2, the Allies performed some analysis on the bombers returning from raids over Germany, looking in particular at where the shell impacts were located. Based on these data they increased the protection of the most impacted zones, but did not manage to raise the survival rate.

The bias lies in the misinterpretation of the data. If bombers managed to get back, it means the zones riddled with impacts were resistant enough. On the other hand, if no plane came back with an impact in a specific location, it clearly highlighted a weak spot to reinforce.

Although it is not certain that the mistake was actually made, the concept of the bias remained, and it can easily be applied to IT security in many ways.

Discovering blind spots

One of the challenges today is to ensure the attack surface is properly secured. The verbosity of security solutions can sometimes be blamed, but a second-order analysis can reveal which component, network or application (wherever it is deployed) is not properly protected, or at least not supervised. Indeed, not receiving any security log or telemetry data from an asset does not mean it is not being attacked, but simply that it is not protected.

Delving deeper into the analysis, an inconsistent volume of information received from similar components, deployed in a similar context, is usually good evidence that some security feature is not enabled, or at least not properly configured.
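
To make this second-order analysis concrete, here is a minimal sketch of the idea in Python, assuming a hypothetical asset inventory and hypothetical daily event counts (the component names, the data layout and the 10% threshold are illustrative assumptions, not a reference implementation). It compares each component against the median volume of its peers deployed in the same role, and flags silent or abnormally quiet assets as potential blind spots.

```python
from statistics import median

# Hypothetical inventory: every component we *expect* to send security telemetry,
# grouped by role (similar components deployed in a similar context).
inventory = {
    "edge-firewall": ["fw-paris", "fw-lyon", "fw-lille"],
    "web-frontend": ["web-01", "web-02", "web-03", "web-04"],
}

# Hypothetical daily event counts collected from the log pipeline.
daily_events = {
    "fw-paris": 48_200, "fw-lyon": 51_900, "fw-lille": 0,
    "web-01": 9_800, "web-02": 10_400, "web-03": 310, "web-04": 9_950,
}

QUIET_RATIO = 0.10  # assumption: below 10% of the peer median is suspicious

def find_blind_spots(inventory, daily_events):
    """Return components that are silent or far quieter than their peers."""
    findings = []
    for role, components in inventory.items():
        volumes = [daily_events.get(c, 0) for c in components]
        peer_median = median(volumes)
        for component, volume in zip(components, volumes):
            if volume == 0:
                findings.append((role, component, "no telemetry at all"))
            elif peer_median > 0 and volume < QUIET_RATIO * peer_median:
                findings.append((role, component,
                                 f"only {volume} events vs peer median {peer_median:.0f}"))
    return findings

for role, component, reason in find_blind_spots(inventory, daily_events):
    print(f"[blind spot?] {role}/{component}: {reason}")
```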

Dysfunction early alerting

I used anti-spam as a supervision tool for my mail server (back when you had to host your own one, a few years - decades - ago). When I stopped receiving spam alerts, it meant that the mail server was down.

The same can be applied to security components. If the volume of information received suddenly drops for a whole category of security components, it is quite likely that an update or a configuration change has impacted the detection capabilities. Such a behavior should raise a critical alert, unless you are prone to believe that malicious actors suddenly went on a break.
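
As a minimal sketch of such an alert, assuming hypothetical hourly event counts per category and an arbitrary 50% drop threshold (both are assumptions for the example), the current hour can be compared to the category's own recent baseline, and a collapse raises a critical alert.

```python
from statistics import mean

# Hypothetical hourly event counts per security component category,
# oldest first; the last value is the current hour.
hourly_counts = {
    "edr": [12_300, 11_800, 12_650, 12_100, 11_950, 12_400, 380],
    "waf": [4_200, 4_350, 4_100, 4_500, 4_280, 4_150, 4_320],
}

DROP_THRESHOLD = 0.5  # assumption: alert when volume falls below 50% of the baseline

def check_category_drops(hourly_counts, threshold=DROP_THRESHOLD):
    """Flag categories whose current volume collapsed compared to their own baseline."""
    alerts = []
    for category, series in hourly_counts.items():
        *history, current = series
        baseline = mean(history)
        if baseline > 0 and current < threshold * baseline:
            alerts.append(
                f"CRITICAL: '{category}' volume dropped to {current} "
                f"(baseline ~{baseline:.0f}); check recent updates or configuration changes"
            )
    return alerts

for alert in check_category_drops(hourly_counts):
    print(alert)
```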

Evasion detection

In the previous case, we referred to a drop across a whole category of components. Now, in the case of a single component, the most probable cause (especially if other elements on the same component are still generating data) is that the detection engine has been evaded or disrupted.

As an example, when an EDR stops providing telemetry while still sending beacons, it is highly probable that it has been unhooked, thus becoming blind to anything happening on the system. A similar reasoning applies to network probes: if the volume of data related to a specific type of traffic (protocol, source or destination, etc.) drops, it may be that this traffic has been diverted, and that is probably not good.
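
The EDR case boils down to a consistency check between two signals. The sketch below assumes hypothetical per-agent "last beacon" and "last telemetry" timestamps and arbitrary freshness thresholds (all names and values are illustrative); it flags agents that still check in but no longer produce detection telemetry.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical last-seen timestamps per EDR agent: "beacon" is the agent
# check-in, "telemetry" is the last actual detection or event data received.
agents = {
    "host-finance-12": {"last_beacon": now - timedelta(minutes=2),
                        "last_telemetry": now - timedelta(minutes=5)},
    "host-build-03":   {"last_beacon": now - timedelta(minutes=1),
                        "last_telemetry": now - timedelta(hours=9)},
}

BEACON_OK = timedelta(minutes=15)            # assumption: agent considered alive
TELEMETRY_MAX_SILENCE = timedelta(hours=2)   # assumption: longest acceptable silence

def detect_possible_evasion(agents):
    """Flag agents that are alive (recent beacon) but no longer send telemetry."""
    suspects = []
    for host, seen in agents.items():
        alive = now - seen["last_beacon"] <= BEACON_OK
        silent = now - seen["last_telemetry"] > TELEMETRY_MAX_SILENCE
        if alive and silent:
            suspects.append(host)
    return suspects

for host in detect_possible_evasion(agents):
    print(f"[possible evasion] {host}: agent still beacons but sends no telemetry")
```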

Do not trust still waters

The Survivorship bias encourages us to interpret the absence of data as something positive: obviously, nothing wrong is happening. The reality is the opposite: a lack of information is clearly the symptom of a problem, and the key is the ability to analyze which information is missing, on which scope, and in which operational context.

Cognitive biases and security evaluation

The pentester dream: a Confirmation bias

Auditing or pentesting a system is a gratifying task, implying recognition of the exceptional technical skills and knowledge of whoever performs the tests. Therefore, ending an engagement with no valuable finding appears as a failure, especially in a context where it is well known that "all defenses can be bypassed".

As a consequence, it is very common to observe different variants of the confirmation bias, which consists in altering, diverting or ignoring facts in order to confirm one's preconceptions.

The most usual variant is simply an arbitrarily inflated rating of a minor finding. While the severity is objectively low, it will be raised to high or severe by building improbable scenarios in which the vulnerability would be the cornerstone of a larger, critical kill chain. Another common "trick" is to apply the rating of a quite similar vulnerability found in a completely different context and heavily documented as critical.

It also happens that serious findings are made, but outside the scope defined by the rules of engagement. A blackbox audit has to be performed with no prior knowledge of, and no access to, the targeted system. A 3-day pentest is a 3-day pentest, period. Unfortunately, closing a 3-day blackbox engagement with no valuable finding is frustrating, especially when you could get access to some internals of the audited system and still have some free time ahead.

By themselves these biases should not be too impactful, as long as the report is read and properly analyzed by teams with the relevant skillset (but also their own biases). However, in the case of an official process - such as a certification - it tends to artificially lengthen the whole process, with the additional delays and financial consequences one can imagine.

On top of that, it can also provide very valuable input for the Anchoring bias.

There is no 100% security: the Anchoring bias

The anchoring bias is the tendency to pick the selected pieces of information that reinforce a specific belief, discarding any context or detail that would provide a more objective view.

Don't get me wrong, it is actually true that no security can be guaranteed 100%. This translates into "all security systems can be bypassed" and often comes along with "all firewalls / EDR / anti-spam / container-security / ... solutions are the same". The combination of these two statements leads to the over-pessimistic conclusion that whatever one does, one will be compromised.

Indeed, all security systems can be bypassed. However, such a statement should be tempered by considering the difficulty for an attacker to effectively bypass this security. A system will be breached, that is a fact. It doesn't mean it has to be breached every day, by anyone.

The second statement is an easy shortcut, wrongly appearing as a corollary of the first: "all security systems can be bypassed as they are all the same". Wrong. All solutions are different, and this is where the important part lies. They will be bypassed, yes, but with different techniques, involving different skills and efforts. Acknowledging these differences implies evaluating the technologies involved in the security engines. Once the weak spots are identified, it becomes possible - if necessary - to bridge the gap with additional solutions and technologies.

The anchoring bias encourages us to ignore such an approach, with the consequence of defocusing from protection to massively engage in reaction. Proper security is a balance between the two. Insufficiently protected infrastructures will overload the SOC teams, eventually impacting their efficiency - and their mental health.

On the other hand, an over-protected system will reach such a level of complexity that human errors will do the job for the hackers. And anyway, one day or another someone will get in; remember: all security solutions can be bypassed. Then, the lack of supervision will turn this infrastructure into a hackers' playground.

Zero-risk bias

The Zero-risk bias is the tendency to reduce as much as possible the risk on one specific point of the attack surface, disregarding the more global objective of securing the entire infrastructure.

This usually happens when auditing and compliance come first, while the initial goal is to protect. Ensuring that the management of a security solution is consistent and prevents erroneous or malicious actions is important, but it should not be the only focus, or even the main one. Yet it is probably easier to state that the administration of the security is 100% compliant with corporate policy than to evaluate the real gain in terms of exposure. This is where the Zero-risk bias comes into play.

It implicitly validates that one does not invest time and resources in tuning the security solutions, but rather in the "administrative" part, which is fully compliant. Such infrastructures usually get hacked, in compliance.

Not really a big deal, as it is still possible to fall back on the Anchoring bias...

Conclusion, and a little bit about AI

Cognitive biases are well known. It is also known that they are particularly difficult to circumvent, as they are part of our nature. Voluntarily or not, the human brain tends to interpret information in a certain way. Whether it is to remain in a comfort zone, to avoid admitting mistakes or a lack of knowledge, or simply to reinforce immutable beliefs, we are all subject to cognitive biases. Being aware of such logical deviations is already a good start, at least to search for solutions.

Would AI be an option? As Machine Learning models are pure math, the human factor is de facto excluded. True. But we have to keep in mind that the design of the model is done by humans. Supervised models require qualification of the training data, by humans. And the quality of a model will still be evaluated by humans, who will choose which criteria are the most relevant for this specific model.

We can get help, and AI will definitely improve our resilience to cognitive biases. But in the end we will have to do the job. Staying technical, in order to objectively understand the reality of the threats, is a good way to dodge the common traps, or at least to spot the biases of our peers...
