Keep the hands-on imperative
The first principle laid down in the Credo of Hackers 1 comes in two parts: "Access to computers - and anything which might teach you something about the way the world works - should be unlimited and total", and "Always yield to the Hands-On Imperative".
The second part is no surprise, considering the common belief about cyber offenders: super-expert geeks spending their days on their computers, trying to break things in every possible way.
Does this mean that we, mere humans, are bound to fight a losing battle against superheroes? No.
We can still be on par with attackers, once we accept that cyber security is not only a question of standards and compliance. It is, and remains, a technical journey.
Learn to think
It is pointless to start working on such a complex topic without being imbued with the very specific "hacker" way of thinking. Just as it is pointless to embrace this mindset without a solid understanding of what is really happening under the hood. Doing security requires solid technical foundations.
Knowing that hackers always find a way to bypass security is one thing. It is quite another to understand the process that leads from inline hooking to indirect syscalls - via unhooking and direct syscalls - in order to evade EDR detection.
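To make the first step of that chain concrete, here is a minimal, simulated sketch of the check an unhooking tool performs. The byte values are real x64 conventions, but everything else is hypothetical: on 64-bit Windows, a clean ntdll syscall stub begins with `mov r10, rcx ; mov eax, <syscall number>`, while an inline hook overwrites those first bytes with a relative `jmp` (0xE9) into the EDR's handler. Real tools read these bytes from the live process and compare them with a clean copy of ntdll mapped from disk; here the stubs are hard-coded.

```python
# Simulated sketch: deciding whether a syscall stub has been inline-hooked.

CLEAN_PROLOGUE = bytes.fromhex("4c8bd1b8")  # mov r10, rcx ; mov eax, <SSN>

def is_hooked(prologue: bytes) -> bool:
    # 0xE9 is a relative jmp: the classic signature of an inline hook.
    return prologue[:1] == b"\xe9" or not prologue.startswith(CLEAN_PROLOGUE)

clean_stub  = bytes.fromhex("4c8bd1b818000000")  # mov r10, rcx ; mov eax, 0x18
hooked_stub = bytes.fromhex("e978563412")        # jmp <edr_handler>

print(is_hooked(clean_stub), is_hooked(hooked_stub))  # False True
```

Once a hook is spotted, the evasion choices branch out: restore the clean bytes (unhooking), rebuild the stub yourself (direct syscalls), or jump back into a legitimate `syscall` instruction inside ntdll (indirect syscalls).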
It is good to know your enemy. Much better if you understand him.
Sun Tzu said: "The cyber master knows how varied a path the adversary can walk between systems, across the network" 2
Indeed, an attack chain relies on specific cornerstones: evasion, pivoting, relay servers, MitM, C2 channels, etc. These are the weakest points of the chain, the ones defenders must target and break. Taking down a single one of these points of failure will disrupt the entire chain. This is why deep knowledge of the attacker's process is so important. Step into their shoes, imagine the scenarios and guess what their next step would be.
If you manage to think the way an attacker does and understand the technical challenges they face, then you are already halfway there.
Go deep
The first thing you do when you code or work on an infrastructure is make mistakes. And this is probably the most important lesson you can learn. Because those who designed and set up the application you are using, the network you are relying on, and the multi-cloud infrastructure hosting your CI/CD environment have made mistakes too.
When properly analyzed from a security standpoint - that is, with the "hacking" spirit - these same mistakes you would have made turn out to be at the core of what attackers take advantage of: design, logic and technical errors.
Deploy an SD-WAN network; set up and secure a cloud-based application relying on data buckets, a hosted database and serverless functions; run an Active Directory infrastructure... Play, try, attack and break it the way an attacker would, fix it, then start again. Gaining inner knowledge of those components is the best way to invest your time.
First, we learn that we make the very basic mistakes everyone does. Even though we knew about those mistakes, complained about them and cursed the lazy engineers who made them, we still jeopardized the security of our system, or disrupted operations with a nice collection of false positives...
Then we can start delving into the security technologies we want to focus on. It is no longer a question of installing and configuring things, but of designing and developing them. And surprisingly, you will realize that some simple concepts, such as basic WAF signatures, become incredibly complex to implement when they have to resist evasion, avoid false positives and maintain performance. This exercise should lead you to the most secret arcana of regular expressions. If you want to give logical errors a try, write your own RBAC engine...
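A toy illustration of why those "basic" WAF signatures get complicated fast - both patterns below are hypothetical and nowhere near production-grade. The naive signature catches only the textbook payload; the moment you harden it against case tricks and inline SQL comments, the regex starts growing:

```python
import re

# Naive signature: matches the textbook payload only.
naive = re.compile(r"union select")

# Slightly hardened: case-insensitive, tolerates inline comments and
# variable whitespace between keywords. Still trivially incomplete.
hardened = re.compile(r"union(?:\s|/\*.*?\*/)+select", re.IGNORECASE)

plain  = "1 UNION SELECT password FROM users"
evaded = "1 UnIoN/**/SeLeCt password FROM users"

print(bool(naive.search(plain)), bool(naive.search(evaded)))        # False False
print(bool(hardened.search(plain)), bool(hardened.search(evaded)))  # True True
```

Every broadening of the pattern is a trade-off: it buys evasion resistance at the price of more false positives and slower matching, which is exactly where the exercise becomes instructive.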
Another valuable topic is endpoint security. Coding your own malware capable of evading the EPP and EDR installed on your VM (hopefully you don't do that on your laptop...), then crafting the most efficient YARA rule to detect your own creation, is one of the most educational exercises (or, if you would rather focus on network security, write a remote exploit and the matching SIGMA detection rule).
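At its core, such a detection rule is just a set of strings plus a match condition. A deliberately toy, YARA-like sketch - the strings and samples are made up for illustration:

```python
# Toy, YARA-like rule: fire when at least `required` strings appear.
def rule_matches(sample: bytes, strings: list[bytes], required: int) -> bool:
    return sum(s in sample for s in strings) >= required

# Hypothetical artifacts you might have left in your own lab malware.
STRINGS = [b"\xde\xad\xbe\xef", b"Global\\lab_malware_mutex"]

sample = b"MZ...\xde\xad\xbe\xef...Global\\lab_malware_mutex..."
benign = b"GET /index.html HTTP/1.1"

print(rule_matches(sample, STRINGS, 2), rule_matches(benign, STRINGS, 2))  # True False
```

A real YARA rule adds wildcards, offsets and condition logic on top of this, and writing one against your own binary teaches you exactly which artifacts survive your evasion attempts.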
Designing BPF filters, monitoring file systems and the registry, hardening OSes, services and applications, practicing incident response, storing then cracking passwords are other good challenges to learn from.
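The last item on that list shows how the two sides of the exercise feed each other: store a password badly, then crack it yourself. A minimal sketch using only Python's standard hashlib (the password and wordlist are hypothetical):

```python
import hashlib
import os

# Weak storage: unsalted, fast MD5 - one wordlist pass cracks every account.
weak_digest = hashlib.md5(b"winter2024").hexdigest()

def dictionary_attack(digest, wordlist):
    # Hash each candidate and compare: the entire attack in three lines.
    for word in wordlist:
        if hashlib.md5(word).hexdigest() == digest:
            return word
    return None

print(dictionary_attack(weak_digest, [b"123456", b"winter2024"]))  # b'winter2024'

# Better storage: per-user salt plus a deliberately slow KDF, so the same
# attack must run per account and each guess costs 100,000 rounds.
salt = os.urandom(16)
strong_digest = hashlib.pbkdf2_hmac("sha256", b"winter2024", salt, 100_000)
```

Having run the attack yourself, the design choices behind salting and slow key derivation stop being compliance checkboxes and become obvious.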
Back to business
It may seem legitimate to question the necessity of comprehending the bits and bytes of attacks and defenses. The answer, however, is straightforward. It is quite inconceivable to properly protect your infrastructure if you don't know what you should protect it from. And it will also be quite difficult to achieve that goal if you don't know what the relevant technical solutions are.
On the other hand, when doing security becomes routine, the fog begins to lift. Attack paths become clearer, as do the defensive options. We reach the last mile of our journey: setting up security.
"Security is a process, not a product" 3 and I agree, to some extent. Because, in the end, the device that will block the offending packet, the MFA system that will prevent (some) phishing attempts, the agent that will send telemetry and isolate a compromised system, and even the SOAR that will increase the efficiency of the incident response process - all of those are products 4.
And all products have capabilities and limitations.
This is where the value of what we learnt lies: we exposed the attackers' and defenders' technical challenges. Now we can select products based on what we need and what they can actually do. Because we know how they work, we know the underlying technology, we know the implementation pitfalls and the necessary tradeoffs. So we can test them and evaluate whether they match our needs, in our context. This is the only way.
Some could also point out that technology is fine, but that we also need a global defense strategy. I cannot agree more. And you are going to build that strategy based on your own and your opponents' strengths and weaknesses, aren't you? Back to square one.
A low hanging fruit conclusion
Would you believe crash test results if you didn't know which tests had been run and didn't understand how they were performed?
Would you really trust a lock on your front door if you knew it had been designed by someone who has no clue about lockpicking?
Would you rely on a law office founded by someone with a master's degree in project management?
There is no reason why it would be different in the world of IT security.
The hands-on is imperative. Period.
And by the way, you should also never trust a developer who doesn't use semicolons in JavaScript.