Reflections on Failure - I

Stumbling toward the doors of progress

Disclaimer: The ideas below are my own and may not reflect those of my employer.

A. Definitions of failure

The term 'fail' has all sorts of negative connotations in our culture. To make sure we're thinking about the same concept, we'll start by choosing an operational definition of the thing we're talking about. This definition isn't meant to cover every possible use of the term, but rather to give us some common ground from which to communicate.

For the purposes of this post, we will define failure as an action. Specifically, it is an action that does not accomplish some predetermined goal - a goal that could have been fulfilled, had the outcome of the action been otherwise.

SET GOAL -> TAKE ACTION -> OBSERVE OUTCOME

When the outcome of the action accomplishes the goal, we say that the action was successful. Similarly, when the outcome of the action does not accomplish the goal, we say that the action failed. Notice that in both cases, the goal and the action itself remain constant. The only change between the two scenarios is the outcome. The outcome then determines our interpretation of the action as one of failure or of success.

Since this is a post about Cyber Security, we must acknowledge the apparent asymmetry of offense and defense. While there are no doubt important differences in the goals, economics and operations of offensive and defensive actors, notice that our definition of failure is nonetheless symmetrical for both groups. An offensive action fails when it does not accomplish the goal of the attacker, and a defensive action fails when it does not accomplish the goal of the defender.
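To make the definition concrete, here is a minimal sketch in Python of the goal -> action -> outcome structure described above. All names and the example scenario are hypothetical, invented purely for illustration: the same action and outcome are evaluated against two different goals, and read as a success for one side and a failure for the other.

    from dataclasses import dataclass
    from typing import Callable

    # A goal belongs to an actor and is a predicate over observed outcomes.
    @dataclass
    class Goal:
        owner: str
        is_accomplished_by: Callable[[dict], bool]

    def evaluate(goal: Goal, outcome: dict) -> str:
        # An action succeeds relative to a goal iff its outcome accomplishes that goal.
        return "success" if goal.is_accomplished_by(outcome) else "failure"

    # One action, one observed outcome: a payload detonates and returns a shell.
    outcome = {"shell_obtained": True}

    attacker_goal = Goal("attacker", lambda o: o["shell_obtained"])
    defender_goal = Goal("defender", lambda o: not o["shell_obtained"])

    for goal in (attacker_goal, defender_goal):
        print(goal.owner, "->", evaluate(goal, outcome))
    # attacker -> success
    # defender -> failure
    # The goal and the action stay fixed; only the outcome, measured against
    # each side's goal, determines whether we label the action a failure.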

B. Failure is not fun

It is hopefully uncontroversial to claim that most people dislike it when an action that could have accomplished their goal fails to do so. People are averse to failure. There are probably many evolutionary, psychological and cultural reasons for this to be so, but we won't concern ourselves with explanations here; we will simply take it as fact that attackers experience negative emotions when their payloads don't result in shells, and that defenders experience negative emotions when their systems get compromised.

I imagine most people need some experience with conscious mental conditioning to dissociate negative emotions from failed actions. After teaching and mentoring thousands of students in both Penetration Testing and martial arts, I don't think I've met many (if any) people who find failure fun without additional framing first. That said, I wouldn't be terribly surprised if some subset of people don't have a natural aversion to failure. If you happen to be furrowing your eyebrows and wondering what I'm even talking about here, then I'd love to hear from you about your experiences - I imagine you have a pretty wonderful mental skill! If you are like me and tend not to enjoy failing at things, then the rest of this post is for you. It explores how we can become happier security practitioners even though we will fail time and time again.

C. The "principle" of conservation of failure in Cyber Security

The title of this section is a little tongue-in-cheek, but I do think there is a grain of truth buried here. I asserted earlier that our definition of failure in Cyber Security is symmetric across the offensive / defensive divide. But in addition to being symmetric, failure is also conserved. Every time a defender's actions fail, an attacker must have succeeded in some capacity. And every time an attacker fails to gain traction (recall that our definition only counts goals that could actually have been fulfilled), a defender must have succeeded in some capacity.

(Please note, I am by no means claiming that the consequences of failure are conserved. One of the most important and widely recognized asymmetries in Cyber Security is that the cost of a single failed attack is often trivial for the attacker, while the ramifications of letting a single attack through can be devastating for the defender. Here, I'm merely claiming that the quantity of failures is conserved, not their magnitudes.)

D. Success on one side enables progress on the other

The information security space is a rare (though, as we'll see, not unique) discipline in that its two sides share a strange symbiotic relationship, despite their zero-sum nature. As attackers discover new ways of bypassing defenses, defenders adapt and become better at discovering, preventing and mitigating attacks. Attackers then seek to understand the new defensive inventions, and learn to defeat them. This quasi-evolutionary bond is the harbinger of progress in infosec - excuse the pun. Blue relies on Red to create the pressure for more robust defenses, and Red relies on Blue to constrain the attack space so that new vectors can be found. Both sides therefore grow together, despite the fact that each individual action is necessarily a failure for one side or the other.

This depiction might suggest that the attacker is always the "proactive" agent and the defender always the "reactive" agent in the relationship. However, security is complicated, and both sides often take actions that can be interpreted as "offensive" or "defensive" depending on one's frame of reference.

My favorite example of this subtlety comes from the field of Malware Analysis. In his book Advanced Malware Analysis, Christopher Elisan explains how malware developers (often interpreted as the attackers) need to protect their binaries with certain features, just as malware researchers (often interpreted as the defenders) "attack" malware programs through analysis to understand their directives. As analysts discover more holes in the structure of malware, developers implement increasingly cunning protective mechanisms. Again, both sides improve via their combined failures.

E. Failure is inevitable

There are many reasons why failures in security are inevitable. As I wrote previously, the human minds practicing security are fatally flawed and will therefore make mistakes over time. And even if our reasoning abilities were free of bias, we would still not know everything there is to know about every possible system. Security is about reasoning under uncertainty for both attacker and defender, and sometimes our uncertainty will result in failure. None of us know how to avoid all mistakes in our code, all configuration errors, and all deployment issues. Further still, learning technical skills in general and "security" in particular requires a large amount of trial and error over time.

But we can momentarily disregard our biased minds, the practically unbridgeable gap between what we can know and what is true, and even the simple need to learn skills and knowledge on both sides of the fence. The inevitability of failure follows directly from our earlier observation about conservation. If failure is conserved between red and blue, then every action in this space can be interpreted as a failure for one side or the other. The important question to ask ourselves therefore isn't "How do we avoid failure?", but rather "Given that failure is inevitable, how do we make the best of it?"

F. Making failure (more) fun

So far we have chosen to define failure as a property of actions that do not accomplish predetermined goals. We have acknowledged that for most people, failing to accomplish their goals is unpleasant, and we have also observed that failure in the domain of information security is conserved: attackers win when defenders lose and vice versa. Despite this zero-sum relationship, each side benefits from the successes of the other. Finally, we've deduced that failure is inevitable for a myriad of reasons, and we're now left wondering what to do about it.

We could throw our hands up in despair. If failure is inevitable, why should we even try? But this kind of fatalism falls apart before it even gets off the ground. We know just from looking around the Cyber Security community that there really are knowledgeable people out there. Somehow they have acquired their skills and abilities despite failing countless times. Additionally, we've identified a mechanism through which both offense and defense can make progress by competing and failing together. So we need not spend any more time convincing ourselves that failure is useful. However, since we know we'll be experiencing a lot of it, we might as well try to make it more pleasant. How can we make failure more fun? We'll consider this question in Part 2 of this post!