In a recent column I was asked how we move beyond the trope “it’s not if you’re going to have a breach, but when”. I think the essential issue here for cybersecurity is what’s known as epistemic humility: acknowledging that we have imperfect knowledge of system vulnerabilities and cannot predict every ingenious method an attacker might devise. Further, epistemic humility recognizes that we cannot fully control external forces (motivated attackers) or internal complexities (human error, intricate software interactions), and thus we return to our trope.
I wanted to reflect on the idea of epistemic humility because the term introduces less a simple acknowledgement of ignorance (you can’t know everything) and more an attitude or a posture1 worthy of consideration: that of humility. In its richness, the concept of humility strikes me as intrinsically human and valuable for the practice of cybersecurity.
It’s also striking how distinctly unhumble the cybersecurity marketplace has become - every vendor is named ‘iron’ this or ‘black’ that; citadels, strikes, and talons proliferate to the point where the market feels more like a fantasy first-person shooter than what it truly is: variations on threat and intrusion detection. I suppose this is a necessary element of successful marketing, and my distaste for it explains why I’m not a billionaire. Aligning your product with the (all too real) militarization of cyber also aligns your rate card with military-grade pricing. No one wants to buy the Saint Francis firewall2.
To be sure, there is wide recognition of how weak our position is in the cyber arms race. While I’ve known plenty of security professionals who brand themselves as warriors and experience a frisson when saying ‘APT’, most of the people I’ve worked with see this as silly, merely a form of cosplay. Yet it’s difficult to discuss cybersecurity in anything but the terms of warcraft. We battle hackers, we harden systems, we identify threat agents and deploy firewalls and intrusion detection systems. As accurate as those terms are, I wonder whether we’ve thought through the implications of this framing of our field, and whether it works against us at some level.
Alternatively, I (and others) have tried on medical metaphors - immune system, contamination, infection - but largely these haven’t stuck. Perhaps this is more for sociological and psychological reasons than because they’re a poor fit for purpose. Although, given our propensity to buy snake oil that promises to enhance our health, I’m not sure this alternative is any better.
Which brings me back to humility. I should be clear to distinguish what I mean from that other notion of humility, stemming from ‘low’ status (e.g., the humble peasants) or the non-ostentatious (e.g., my humble abode). No, for our purposes humility (the noun) gains its meaning from epistemic (the adjective), and thus pertains to knowledge and, by extension, to how we learn and know anything (epistemology)3.
So perhaps there are two dimensions to explore. First, how do we operationalize a response to our acknowledgement of permanently imperfect knowledge? And second, does this create an opportunity to reshape the language we use to describe the how, what, and why of cybersecurity?
Clearly, responding to imperfect knowledge is far more difficult than addressing the known. We know firewalls establish an ingress policy for network traffic; we know certain files or executable behaviors are indicative of malware; we know specific domains or IPs are ‘bad’ and should be blocked. So it’s no surprise we constantly look for firewalls that permit more sophisticated rule sets, and anti-malware products that better recognize malicious activity or newly identified malware. We optimize our use of threat feeds. All of this is logical and has a recognized ROI. But it also stems from a very specific set of ‘things we already know’, and, while it’s a bit harsh to say, the success of this sort of optimization feels dangerously close to confirmation bias. And let’s be honest: it’s easy. Once you’ve selected and deployed a product, optimizing or improving that selection isn’t a hard problem (other than paying for it).
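To make the contrast concrete, here is a minimal sketch of that ‘things we already know’ machinery; the feed URL and its one-IP-per-line format are inventions for illustration, not any particular product’s API.

```python
# A toy "known-bad" control: fetch a (hypothetical) threat feed of
# known-bad IP addresses and block exactly what appears on it.
import urllib.request

FEED_URL = "https://example.org/known-bad-ips.txt"  # hypothetical feed


def load_blocklist(url: str) -> set[str]:
    """Fetch a newline-delimited list of known-bad IPs from the feed."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode()
    return {line.strip() for line in text.splitlines() if line.strip()}


def should_block(src_ip: str, blocklist: set[str]) -> bool:
    """Allow by default; block only what the feed has already named."""
    return src_ip in blocklist


blocklist = load_blocklist(FEED_URL)
print(should_block("203.0.113.7", blocklist))  # True only if the feed lists it
```

However well we optimize this loop (fresher feeds, longer lists), it can only ever recognize what someone has already written down. The darkness outside the list is untouched.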
What is hard is deploying and architecting systems and networks to be robust and resilient despite knowing that they’re flawed and susceptible to misuse. This is particularly difficult because, in the spirit of epistemic humility, we haven’t any idea how they’re flawed or what sort of misuse they may suffer. I’ll expand on this question in a later post, as it harks back to my open question number 3 (securing fundamentally insecure systems). It does feel safe to say that there’s no simple answer to this challenge. Despite this, the path to a solution seems to begin with giving the question more attention than it currently receives. Are we committing sufficient time to building architectures and deployment models that assume systems are flawed? Is this the same as assuming all systems are untrustworthy until proven otherwise, as zero trust models propose? I don’t think so, but I would imagine the two issues (zero trust and insecure systems) are loosely coupled.
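For what it’s worth, here is a toy sketch, entirely my own invention, of the posture I mean: code that treats even an ‘internal, trusted’ dependency as flawed, giving it a timeout, validating its output, and failing safe. The service name, URL, and bounds are hypothetical.

```python
# Design as though the dependency is flawed: even an "internal, trusted"
# service gets a timeout, type and range checks, and a fail-safe default.
import json
import urllib.request

INVENTORY_URL = "http://inventory.internal/api/stock"  # hypothetical service


def stock_level(item_id: str) -> int:
    """Return the stock level for item_id, assuming upstream may misbehave."""
    try:
        with urllib.request.urlopen(f"{INVENTORY_URL}?item={item_id}", timeout=2) as resp:
            payload = json.loads(resp.read())
        level = payload["level"]
        # Validate even "trusted" data: reject wrong types and absurd values.
        if not isinstance(level, int) or not (0 <= level < 1_000_000):
            raise ValueError("implausible stock level")
        return level
    except Exception:
        # Fail safe rather than propagate bad or missing data.
        return 0
```

It’s a small gesture, not an architecture, but it hints at the difference between trusting a component and merely using one.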
To be fair, many of these well-understood solutions do help even with some unknowns. But this is somewhat like the old joke about the guy looking for his lost keys under a streetlight. When asked if he lost them there, he responds, “no, but the light’s better here”.
I know that I never gave this issue enough time, and I suspect that’s true for most shops. As I work through this, however, I can’t help but ask whether our struggle to secure our environments has as much to do with our reliance on well-worn technique, and our failure to grapple with this harder issue, as with anything else. Does our confidence in these traditional tactics represent a kind of arrogance on our part, or is it merely comfort with the familiar? As we tackle individual threats, it makes sense to reach for tools crafted for each threat; only a fool uses a hammer to tighten a screw. Be that as it may, I keep coming back to the belief that our continued failure to secure our environments, despite the arsenal in our tool chests, suggests we’re ignoring the broader world outside the streetlight.
If part of our challenge is to think more holistically about the reality of insecure systems within our ecosystems, perhaps the other part is to speak differently, informed by epistemic humility, about this and related challenges. Can we, or should we, even try to remove the militaristic language and framing from cybersecurity? Would doing so help us communicate about it more successfully, whether to the general public or within our own organizations? Or do we simply need to acknowledge that cybersecurity is fundamentally a kind of combat operation, and think of our activities as engagement protocols? As we analyze threat agents for their capabilities and tactics, are we really engaging in adversarial dynamics4?
I’m still trying to figure this out for myself. Talking in these terms is so normal that it’s difficult to change. It’s like trying to talk about people and the world around us without using the language of capitalism: we don’t have people, we have a workforce; we don’t have forests, we have resources. It is interesting how the warfare model subtly shifts attention, and with it liability, away from the intrinsic flaws of software and systems and from the manufacturer to the attacker. As if to say, “it’s okay that we sold you a product that’s full of vulnerabilities; if only some hacker weren’t taking advantage of it.” No doubt if everyone agreed to stop causing car accidents we could do away with pesky seatbelts.
I would prefer that our software and systems were intrinsically robust regardless of any specific threat actor or threat vector. As the foundation of 21st-century civic society, and the structure supporting modern research, art, and discourse, our digital ecosystem probably warrants better than duct tape or thoughts and prayers holding it together. Of course, I’m painting myself into a corner here - simply saying “we deserve better” hasn’t worked for any political movement, nor will it work here.
As I reread what I’ve written, it seems I’m starting to argue that at least one possible direction is to push for a kind of quality in our systems - quality in the sense of a degree of excellence intrinsic to their nature. I’m reminded, however, of once sitting in a random vice-president’s office, listening to a discussion of an issue and helping make the case that we (our university) needed to address this problem. He said, “find out what the Big-10 are doing, I want to be right in the middle”. Mediocrity as a budgetary control.
The literature on epistemic humility is extensive and each philosophical school has its own interpretation, but for our purposes I think we can take a much simpler path. That is, in recognizing that our knowledge is incomplete - we have only partial threat models and are unable to fully describe a threat actor - we must include these elements of ambiguity in our descriptions of risks. This can of course be quite tricky. Our leadership rarely wants to be told, “here’s a risk” without a concurrent “and here’s what we’re doing about it.” It may be that all we can do for some scenarios (that once-in-a-decade zero-day) is enhance the timeliness of our detection mechanisms and incident response processes. But the critical point here is not how we operationalize these, but the style in which we couple incomplete knowledge to them. The need to plan and drill stems not from fear, uncertainty, and doubt about a breach; it is an innate consequence of living with incomplete knowledge, that is, of epistemic humility.
At this point, unlike the man under the streetlight, we both acknowledge the darkness outside our cone of illumination, and explain that we have strategies for mitigating the risks of what lives in that murkiness.
Essentially I’m arguing that our success in discussing cybersecurity would benefit from adopting an intellectual style that uses the language of resilience and acknowledges our inherently weak posture due to our incomplete knowledge of the ecosystem we’re charged with securing. I’m not even sure that last phrase, “ecosystem we’re charged with securing”, is appropriate. “To secure” strikes me as anything but humble, and it is technically unachievable. Perhaps “ecosystem we’re charged with strengthening”? Or simply “ecosystem we’re making more resilient”?
By explicitly introducing the idea of inherent weakness into our discourse, we can create space for conversation about why we need so many layered cybersecurity controls, and reduce resistance to incident response drills. Perhaps this can also push us to spend less time on the well-worn paths of cybersecurity operations and devote ourselves to confronting the unknown.
1. I’m not sure if I should add either the word “emotional” or “intellectual” before the phrase “attitude or a posture” in this sentence; both seem appropriate and human.
2. Though who knows. Maybe there’s a market for Christian security hardware. “Our threat feed is approved by Opus Dei; now with more sins identified!”
3. I’ve always found epistemology to be the key to internal reflection; asking “how do I know that?” or “why do I believe that?” can lay bare the contours of your soul and not merely your intellect.
4. It’s a bit amazing to me how easily this sort of thing rolls off one’s tongue. I feel compelled to point out that many of the actual weed-crawlers I’ve known over the years have been an interesting mix of bravado and humility. Unlike the performative “warriors” in cyber, they recognize that survival requires more than the projection of strength: it demands a deep respect for the unknown.