ParaCyberBellum's Blog
Entropy in IT Security

Entropy is a metric that measures disorder. According to the second law of thermodynamics, the derivative of entropy is greater than or equal to zero in an isolated system. Put more simply, disorder can only grow in a system on which no external force is applied.
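
In symbols, writing S for the entropy of an isolated system, the law simply states:

\frac{dS}{dt} \geq 0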

Dura lex sed lex

This law is universal and there is no reason why it would not apply to IT in general and cybersecurity in particular. Indeed, if we take a step back and compare current IT infrastructure with what it was a decade ago, the second law of thermodynamics jumps out. Assets and applications are spread across cloud platforms, users are scattered everywhere, and processes are distributed between ephemeral containers and serverless functions. And we are not even talking about mobiles, cars, IoT and industrial systems, which embed their share of connected components.

Keeping up with these evolutions to maintain an acceptable security level implies deploying new security technologies, thus increasing the number of solutions and related providers in the infrastructure. But IT evolution and the underlying digital transformation are not the only factors responsible for the uncontrolled growth of the cybersecurity arsenal.

Threats evolve, and so should a decent security platform. Attack payloads can no longer be detected by simple signatures: the time has come for behavioral analysis and machine learning engines. Since ransomware appeared, threats must be stopped before they even reach the system: sandboxing and runtime analysis technologies suddenly emerged. Attack chains get more complex and threats remain persistent: here comes the EDR, MDR, NDR and XDR family.

More surface to cover, more threats, more technologies. Inevitably we end up with more solutions.

Orchestration: Managing entropy

The goal of orchestration is to provide a consistent point of control across multiple resources involved in a specific function. Secure SDWAN, SecOps and Admission Control are a few examples of orchestrators.

Secure SDWAN abstracts away the piping and security headache of safely and efficiently connecting multiple sites and applications across different links. A SecOps platform glues together detection, telemetry, event management and incident response solutions. Admission Control in CI/CD pipelines ensures that all the components of a build comply with the security policy before being run in production.
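
To make the admission-control case concrete, here is a minimal sketch of such a gate, assuming a hypothetical build manifest with a signature flag, a vulnerability count and an allow-listed base image; the field names and policy rules are invented for the example.

# Hypothetical admission-control gate for a CI/CD pipeline.
# Manifest fields and policy rules are illustrative only.

from dataclasses import dataclass

@dataclass
class BuildManifest:
    image: str
    signed: bool                # image signature verified upstream
    critical_vulns: int         # count reported by the scanner
    base_image_approved: bool   # base image is on the allow-list

def admit(manifest: BuildManifest) -> tuple[bool, list[str]]:
    """Return (admitted, reasons) for a single build."""
    reasons = []
    if not manifest.signed:
        reasons.append("image is not signed")
    if manifest.critical_vulns > 0:
        reasons.append(f"{manifest.critical_vulns} critical vulnerabilities")
    if not manifest.base_image_approved:
        reasons.append("base image not on the allow-list")
    return len(reasons) == 0, reasons

build = BuildManifest("registry.example/app:1.4.2", signed=True,
                      critical_vulns=0, base_image_approved=True)
admitted, reasons = admit(build)
print("admitted" if admitted else "rejected: " + ", ".join(reasons))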

From this standpoint, orchestrators will not reduce the complexity of the infrastructure (it will actually increase, since a new component, with its necessary abstraction layers, is deployed), but they definitely lower the complexity of operations. Furthermore, adding more technologies should be seamless, provided they properly integrate with the relevant orchestrator.

Orchestration does not reduce entropy, it just makes it more manageable, with one notable drawback: information loss.

Indeed, the cornerstone of an orchestrator architecture is its abstraction layer. Configuration, logs and operations are converted into sets of standardized atomic data that are processed agnostically of the source and target systems. If we want real consistency across different solutions, these atoms must match the lowest common denominator, implying that some information will inevitably be lost. This information loss therefore has to be assessed to properly evaluate the real gain brought by the orchestrator.
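
A minimal sketch of what this lowest common denominator looks like in practice, assuming an invented vendor log format and a four-field common schema: whatever cannot be mapped onto the schema is dropped, and that is precisely the information loss.

# Sketch of an orchestrator abstraction layer normalizing vendor events
# into a common schema. Vendor field names and schema are invented.

COMMON_SCHEMA = {"timestamp", "source_ip", "action", "severity"}

def normalize(vendor_event: dict, field_map: dict) -> tuple[dict, set]:
    """Map a vendor event onto the common schema.

    Returns the normalized event plus the set of vendor fields that
    could not be represented, i.e. the information lost.
    """
    normalized = {
        common: vendor_event[vendor]
        for vendor, common in field_map.items()
        if common in COMMON_SCHEMA and vendor in vendor_event
    }
    lost = set(vendor_event) - set(field_map)
    return normalized, lost

# A vendor event richer than the common schema.
event = {
    "ts": "2024-05-01T12:00:00Z",
    "src": "10.0.0.12",
    "verdict": "block",
    "sev": "high",
    "rule_id": "XSS-4412",          # vendor-specific context...
    "http_user_agent": "curl/8.0",  # ...that the common schema cannot carry
}
mapping = {"ts": "timestamp", "src": "source_ip", "verdict": "action", "sev": "severity"}

normalized, lost = normalize(event, mapping)
print(normalized)  # what the orchestrator works with
print(lost)        # {'rule_id', 'http_user_agent'}: the information loss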

Consolidation: Reducing entropy

If physics demonstrates that the growth of entropy is inevitable, it also gives us a clue about the fix: apply an external force to the system. In our case this force is called consolidation, and it can be considered at two levels: functions and providers.

Function consolidation

Consolidating functions simply means reducing the number of solutions, agents or platforms bound to be deployed together in the same location. This noticeably reduces the deployment, management and performance overhead.

Enter the EPP (Endpoint Protection Platform), which consolidates the anti-malware, EDR, vulnerability assessment, VPN and Zero Trust Network Access (ZTNA) clients, all to be deployed on the endpoint. We also find the UTM, bringing together routing, firewalling (next generation, please), authentication, application control, intrusion prevention, ZTNA and a few more functions at the network interconnection points. Same with SSE (Security Service Edge), built to provide unified cloud access to firewall as a service (aka cloud-native firewall), CASB, secure web gateway and ZTNA (again...).

Although function consolidation seems a legitimate way to reduce entropy, there are two scenarios where it will not stand.

New functions are usually brought to the scene by "pure players" before being implemented in a consolidated agent / device / platform. And in most cases waiting may (shall?) not be an option. An alien component then has to be deployed, at least temporarily, to address new threats or to provide consistent security coverage of IT infrastructure evolutions in a timely manner.

Or you want the best of breed for each function, and it is a fact that no single provider delivers the best security component in every domain to cover (this is another kind of universal law, although no formal proof is available so far). In this case, entropy will grow at the pace of digital transformation and threat evolution: exponentially. You will then rely on an orchestrator and human resources to manage it.

Provider consolidation

Reducing the number of providers is one step further and the ultimate stage. However, relying on a single vendor for the security of completely different technical environments, such as a CI/CD pipeline, an OT infrastructure or SASE, may not seem so "natural" at first glance. Therefore, the efficiency of this consolidation has to be considered thoroughly.

There are four criteria to evaluate.

First, the foundation. Which components are common to all the solutions of the provider? The usual suspects are the OS, the security modules, the CLI, the UI look & feel (if there really is nothing else...).

The second criterion is inputs and outputs. Are the APIs (input) and logs (output) consistent across the solutions of the provider?

Third, the orchestrator. Even if the information loss is probably much lower than that of an orchestrator managing completely heterogeneous systems, it remains important to get a clear view of the real orchestration capabilities, from the management, supervision and reaction points of view.

Last, and since it will still be necessary to deploy solutions from other providers, what are the integration capabilities with third parties?

No provider perfectly matches these criteria (and most of the time far from it). But the point is to evaluate the technical relevance of consolidating with one specific provider, and the additional resources required to effectively benefit from this consolidation.
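
As a rough way to put this evaluation on paper, the sketch below scores a candidate provider against the four criteria; the weights and scores are entirely hypothetical and should be adjusted to your own context.

# Hypothetical scoring grid for provider consolidation.
# Criteria follow the four points above; weights and scores are examples only.

WEIGHTS = {
    "common_foundation": 0.30,        # shared OS, security modules, CLI, UI
    "inputs_outputs": 0.25,           # consistent APIs and log formats
    "orchestration": 0.25,            # management / supervision / reaction capabilities
    "third_party_integration": 0.20,  # openness to the solutions you will keep anyway
}

def consolidation_score(scores: dict) -> float:
    """Weighted score in [0, 1] for a candidate provider."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

candidate = {
    "common_foundation": 0.7,
    "inputs_outputs": 0.5,
    "orchestration": 0.6,
    "third_party_integration": 0.4,
}
print(f"consolidation score: {consolidation_score(candidate):.2f}")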

An arrow of time, and a conclusion

Delving further into the concept of entropy, we find that it can be used as a way to measure time. Indeed, since it can only increase, its value is necessarily linked to the time at which it has been measured. Entropy becomes an arrow of time, making it a way to distinguish past from future. A consequence of this approach is that if entropy stops growing, then it means that time has stopped.

In the world of IT Security this can be interpreted in a valuable way. We saw that the entropy (or disorder) of cybersecurity infrastructures is bound to two factors, IT infrastructure evolution and threat evolution; and it is quite unlikely that this will change.

Therefore, if at some point you realize that the entropy of your security has not grown for some time, it either means that time has stopped or that you have missed something...
