During my first week running cybersecurity for a major university, the security lead for the supercomputing center popped into my office for what I thought was a meet and greet. While it was that, he shared that they, along with multiple other universities and government agencies, were in the midst of what at the time may have been the largest cyberattack in history. He wanted to know how my office wanted to be involved. For context, I wasn’t just new to the position but new to the field.
That incident captures the state of cybersecurity almost 25 years ago: a bright teenager, sitting in Sweden, could penetrate some of the most precious research environments not through some NSA-esque malware of breathtaking sophistication, but largely by exploiting vulnerable systems and being extremely fast on his keyboard. Lacking, as we were, a large-scale framework for a coordinated response, I watched as seasoned professionals struggled to communicate securely, struggled to alert others, struggled to enlist Federal law enforcement, and basically struggled to keep up with a noxious brat [1].
In many ways the field of cybersecurity has come a long way in the decades since that attack. My colleagues are serious, trained professionals, with few graybeards like myself remaining who fell into cybersecurity by accident from distant fields. Until being dismantled by a leading crime family, CISA and the FBI had become essential partners for security professionals, critically important as formerly adversarial nation states have professionalized and expanded their cyber espionage activities. We now have a mature market for cybersecurity tools, and the major cloud vendors have invested in world-class cyber teams. Within higher education, most universities have (albeit understaffed) security offices; they have deployed, or are deploying, at least within enterprise IT, core cybersecurity controls and practices; that is, we have moved the needle on cyber as far as existing resources permit.
Yet the disparity between what we wish for and how we operate is startling. We security practitioners bemoan our exclusion from the strategic activities of many organizations, yet the field as practiced is mired in minutiae. We lack a long-term narrative arc and thus struggle to weave our story into that of our nation or our parent organizations. We move from point to point but with no real sense of where we’re going. At times I’ve feared that cybersecurity is intrinsically tactical and that pursuing a strategy is pure folly.
Nevertheless, in the modern, highly connected, digitally defined world, it’s hard to imagine a dimension of organizational strategy that wouldn’t benefit from being informed by contemporary cyber paradigms. Yet within the higher education space most CISOs function as directors of security operations and are rarely afforded any ‘C-suite’ agency to significantly inform or contribute to organizational strategy beyond the narrowest swimlane. In short, within most universities, there is no cybersecurity forest, merely trees.
Things are a bit better, er, different in the private sector. There we see ample opportunity for cyber to engage with CEOs and boards, but there cyber operates in service to the profit motive: “strategic” is narrowly defined in terms of market differentiation and maximizing shareholder return, not any consumer or national interest.
So what constitutes the cybersecurity forest? How do we gain enough altitude to see its outlines without getting lost in the clouds? What follows are a number of open questions that I believe help give shape to that forest; questions we must tackle over the next decade. I wouldn’t call it a research program so much as a set of areas we need to collectively address (study, legislate, implement) if we want to truly change how we’re currently operating. Thus I want to propose some of the dimensions of a cybersecurity narrative arc. Let me begin that arc by describing where we are today.
We operate in a world of highly interconnected systems and networks, little of which has been constructed to be secure or resilient. “Accretion” may best describe how we got here. As we procure more and more systems and connect them together, we do so with flawed and insecure products whose producers bear little or no accountability for their resilience. Worse, we lack effective means for evaluating the quality of the security baked into these products, and have no way to benefit from others’ analyses of them. Thus we pay a premium for additional security functions, a premium for products to assist in identifying or patching flaws in said products, a premium for trying to understand how flawed the products are, and a premium in cyber insurance to insulate us from the consequences of using these flawed products. We rarely know who our users are, but we know our users are forced to generate new, unique personas for every service they use, ensuring that they’re too overwhelmed with credential and account management to protect our systems or their own data from theft and abuse. Severance isn’t just dystopian sci-fi; it describes how our digital identities currently live: an ecosystem of innies, each blind to the others, at such scale that our one outie can’t keep track of them [2]. Our security teams have little influence in helping build holistic resilience into our IT infrastructure or business processes and are seen not as thought leaders but as cost centers to be value engineered. Finally, the national ecosystem of cyber protections is being largely abandoned as our own government’s alignment with criminal organizations is codified. Is it any wonder that CISOs are dropping like flies?
As the world transitions from one of hard-power conflicts to one where digital power defines national and geopolitical boundaries, it seems obvious that information assurance [3] must similarly mature. Not only is this necessary to stabilize economic and communicative activities, but critically, cybersecurity and privacy practices allow consumers and citizens effective civic engagement without entering into a quid pro quo with their civil rights. With the collapse of Pax Americana and the replacement of democracy with oligarchy, it strikes me as clear that we are far, far behind the curve.
It may be discouraging at this particular historical moment to consider the prospects for addressing many of these issues. Some, perhaps the more critical questions, truly require a nation-state response: an investment of national proportions with supporting regulations and enforcement. Something highly unlikely in our current political environment. Be that as it may, I am posing these as challenges without immediately making my own suggestions about how to move them forward. I, for one, truly have faith in our collective intelligence and believe we can accomplish much despite the choir of harbingers ringing in our ears.
Please note that I’m neither foolish nor arrogant enough to think this list is perfect or even comprehensive (as I’m sure the comments will show). Most people will find it idiosyncratic and I’d be the last one to argue against that. It’s largely born of my personal experiences as a CISO, rather than any systematic analysis. I also found it difficult to find the right level of abstraction for the questions. It’s easy to extrapolate many of the challenges to such a high level that it precludes any practical action. For example, it would be fair to say that one challenge is figuring out ‘how to secure hypercomplex and interconnected systems’. Sure, and perhaps if any open research survives the next decade, someone could fund tackling that, and then talk Congress into enacting the necessary legislative framework to enforce it. But I’d prefer to frame these questions in the middle tier, between in-the-weeds technical challenges and questions of theology.
So let’s get to the questions first, and the exegesis to follow.
1. Supply chain confidence
2. Cybersecurity costs as a tax
3. Securing fundamentally insecure systems
4. Software vulnerability detection
5. Strong user identification (i.e., ubiquitous identity management)
6. The ‘horse has left the barn’ problem of PII
7. How to situate cybersecurity within organizations, positionally and financially
8. Untrustworthiness of the Federal Government as a partner
Expanded:
Supply chain confidence
The ‘supply chain’ is all the stuff we buy and the services we use that are produced by someone other than ourselves. Ever since SolarWinds, a lot has been written on the question of securing the supply chain, but it is difficult to point to any significant progress toward preventing a repeat of it. If we surveyed software vendors with the question “what practices have you implemented in your SDLC in response to SolarWinds?”, I suspect the responses would fit on an index card.
I begin with the question of how we attain confidence in our supply chain because it’s both interesting (just what do we mean by ‘confidence’?) and contains within it a host of dependent questions. Importantly, because we’re largely discussing software and services as commerce and not merely technology, any approach to having confidence in the supply chain necessarily includes procurement and contract law, liability questions, and regulatory issues. These are areas desperately in need of reform but rarely seen as a domain for cybersecurity professionals. Many of the approaches to supply chain management being taken today are frankly performative. Solutions such as vendor certifications or requiring software bills of materials (SBOMs) are minor incremental improvements, but they feel more like contractual liability hedges than solutions that actually protect data and services. We need to admit that not only does no one have any idea how to ‘buy’ confidence in a software product or cloud service, but further, simply performing a due diligence review of each represents an enormous tax on the economy. Redundant, expensive, performative, and ineffective. Given our growing dependence on outsourcing and cloud services, it’s hard to imagine a more worthy open problem.
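Since ‘SBOM’ can sound like more protection than it actually provides, here is a minimal sketch (Python, assuming a CycloneDX-style JSON document and a hypothetical file name) that walks a product’s component inventory and flags entries with no pinned version or integrity hash. Even a clean pass only tells you what the vendor claims is in the box; it says nothing about whether any of it is trustworthy.

```python
# Minimal sketch: walk a CycloneDX-style SBOM (JSON) and flag components that
# lack a pinned version or an integrity hash. Illustrative only -- an SBOM is
# an inventory, not a control; knowing what's inside a product doesn't make
# the product trustworthy. The file name and field handling are assumptions.
import json

def flag_weak_entries(sbom_path: str) -> list[str]:
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)

    findings = []
    for comp in sbom.get("components", []):
        name = comp.get("name", "<unnamed>")
        if not comp.get("version"):
            findings.append(f"{name}: no pinned version")
        if not comp.get("hashes"):
            findings.append(f"{name}: no integrity hash to verify against")
    return findings

if __name__ == "__main__":
    # Hypothetical input file; any CycloneDX 1.x JSON document should work.
    for finding in flag_weak_entries("vendor-product.cdx.json"):
        print(finding)
```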
Cybersecurity costs as a tax
Enterprise software [4] is usually delivered in multiple flavors. By default, products deliver the user-facing features that a company’s marketing department feels potential customers will find valuable. Typically, the features that the security or privacy team wants to see (everything from database encryption to activity logs to user control of data usage) are an added cost rather than a native part of the product. Companies will often, if they’re small enough, agree to address a security request, but “that costs us real money” is the refrain, so ka-ching! An extra cost. Sure, paying programmers to build secure software or providing the log data necessary to monitor for attacks and investigate compromises consumes resources. But why are these an extra cost? Are security features today’s seatbelts? We pay for breaches, breach insurance, vulnerability detection, and upgrade and patch disruption, all post-purchase. Let’s move those costs into coding and architecting secure products.
Securing fundamentally insecure systems
I take it as axiomatic that every software product is fundamentally flawed. The bulk of OT and IoT is intrinsically insecure and likely unsecurable. None of it is designed within any sort of cohesive, holistic framework. Yet we must meet the obligation of running it securely. How? Is there a philosophy of architecting and operating our IT and data environments securely that can be effective despite the weaknesses in each element? Resilience needs to become more than having redundant systems.
Software vulnerability detection
Humans write software, and humans are flawed creatures (and before anyone says “AI”, remember who created AI and who it’s trained on). This is obviously a corollary to question number three, and one that has also spawned a huge industry. (Think about that for a second: the products you’re paying millions of dollars for are of such poor quality that you have to pay for yet more software to tell you where all the flaws are. Sadly, when the flaws are found you sit around and hope the vendor bothers to patch them.) I suspect that the answer here has less to do with better code or application scanning, and more to do with legislation. Companies simply won’t invest in secure coding and robust solutions until they’re fiscally and criminally liable for their products. (Oops, I promised to avoid offering solutions, but sometimes it’s hard not to show your cards.)
Strong user identification (i.e., ubiquitous identity management)
This one may appear to be in the weeds, but I think the impact of identity management writ large on society needs much more attention than it’s currently receiving. Attacks on voting integrity begin as an identity management problem. Frustration that government services all seem blind to one another? Identity theft? All fabric held up by the identity management tent pole. Obviously it’s an open question whether we’d be better off retaining our massively heterogeneous identity universe (I have 700 accounts in my password manager) or consolidating into one commercial or governmental persona. My concern is that if we don’t tackle this, the credit bureaus will. Whatever we do, we need to stop pushing this problem onto consumers. For me the connection from user identity → access control is so naturally a central cybersecurity concern that I’m surprised how many security teams interact with identity teams only through the lens of credential management.
The ‘horse has left the barn’ problem of PII
Protecting PII is over. We’re fighting minor performative skirmishes when the war has been lost. I and a thousand others have talked about how to ‘protect yourself from identity theft’; meanwhile, as I protect my Gmail account or credit report, my hospital is crushed by ransomware, my employer loses all my data, OPM is hacked by the Chinese, and now children working for Elon Musk have all my federal data. How do we operate in this world? What’s our obligation as IT providers and security professionals? Yelling more at users about how to protect their accounts is little more than “old man yells at cloud” and only serves to distract from our own failures. My instinct tells me we need to reframe this problem and stop looking for ways to plug holes in the dike. So what to do? Is the only answer apathy or cynicism? I hope not.
How to situate cybersecurity within organizations, positionally and financially
This is mostly a question for higher education. It’s fascinating to note that while in the private/commercial sector 80% of CISOs report to the CEO, in higher ed it is the rare CISO who doesn’t report to the CIO. I could talk about the problems with this, but fundamentally, the CIO manages a cost center. Cut your IT budget? Sure! Fewer services or lower quality of service. But you can’t value engineer risk out of the equation. Managing risk (even under shrinking budgets) is a strategically different problem than managing IT. It’s sad, tbh, how often leaders start an assessment of their cyber posture by asking about compliance or controls. Start with budgets and reporting lines; the rest are minor elaborations.
Untrustworthiness of the Federal Government as a partner
I almost called this one ‘Living under a totalitarian state’. The Federal bureaucracy in general has collapsed or is collapsing, as is the rule of law. Obviously the government’s role as a nonpartisan service organization has been written out of the play. If we couple this with the deep penetration of adversarial assets into the intelligence and law enforcement agencies, as well as the destruction of the classified data protection machinery, I see two specific issues we should tackle [5]. First, much of modern cybersecurity is shaped by geopolitics. Though the large data brokers and tech firms provide a lot of valuable threat intelligence, the Federal government has remained a unique and important source for business and academia. The deep integration of nation-state activities with criminal enterprises is undeniable (a nice discussion of the operational nuance to this is here). Yet with the US Government now allied with (and possibly cooperating with) these hostile states, how do we address the need for this important and non-public threat intelligence? Second, how do we prevent the penetration of our systems and the theft of our data by the US Government itself? Given the legal instruments available to the government, what technologies can assist in protecting PII and private communications? Or, more broadly, entire targeted groups?
Ontology of Questions
As I reflected on this list, I naturally found myself putting the questions into categories (sadly, this is classically how I work: identify items first, classify them second).
Technical & Engineering Challenges: Questions related to the inherent technical difficulties and limitations in building secure and trustworthy systems. Questions 3, 4, & 5.
Economic & Resource Questions: Questions that focus on the financial implications and resource allocation related to cybersecurity. Questions 2 & 7.
Trust & Governance Questions: Questions that address issues of trust in institutions, governance structures, and partnerships. Questions 1, 7, & 8.
Strategic & Philosophical Questions: Broader, more fundamental questions about the overall approach to cybersecurity, including philosophical outlooks and high-level strategic dilemmas. Questions 6, 7, and 8.
Organizational & Positional Questions: Questions relating to how cybersecurity is structured, managed, and positioned within organizations, both financially and structurally. Question 7.
It would make a fascinating three-day workshop to take each of these domains and brainstorm with a broad group of experts a richer set of open questions in each. The fact that many questions belong to multiple areas suggests they may benefit from a narrower framing, or that my ontology is poor. Or simply that they’re complex questions that operate across many domains (which strikes me as the most likely answer). The lack of any mention of AI is deliberate. That’s currently such a complex and evolving domain, as much political as technical, and at the edge of my own expertise, that I’m saving any discussion of AI for a dedicated post. Hopefully without running afoul of this.
In future posts, I’ll break some of these down and discuss possible directions for addressing them; hopefully I’ll be able to interview some of my smarter colleagues on the issues arising from these questions. Though of course, the hardest part will be to make progress on them once reasonable courses of action are identified. Please feel free to use the comments to respond, disagree, or wax poetic about alternatives.
[1] I should probably point out that Stakkato was caught, and that the very colleagues I mention were instrumental in both constraining him and providing valuable intelligence to Federal law enforcement. It truly was heroic, and the lessons learned from their efforts are as valuable today as they were decades ago.
[2] Yes, I’m bending the innie and outie relationship here a bit. Still makes for an effective simile.
[3] I use the term ‘information assurance’ as the umbrella for cybersecurity, privacy, and data management and curation writ large.
[4] ‘Enterprise software’ is all the ordinary software we use in everyday life and business, from Gmail to O365 to PeopleSoft. It’s all most people know, but research and academic environments are quite different, often permeated by custom and niche software.
[5] Among so, so many.


