Open question no. 1: supply chain confidence
Liability rituals
I begin with the question of how we attain confidence in our supply chain because it is both interesting in its own right (just what do we mean by ‘confidence’?) and contains a host of dependent questions. Importantly, because we’re largely discussing software and services as commerce and not merely technology, any approach to having confidence in the supply chain necessarily includes procurement and contract law, liability questions, and regulatory issues. These are areas desperately in need of reform but rarely seen as a domain for cybersecurity professionals. Many of the approaches to supply chain management being taken today are frankly performative. Solutions such as vendor certifications or requiring software bills of materials are minor incremental improvements, but they feel more like contractual liability shields than solutions that actually protect data and services. We need to admit not only that no one has any idea how to ‘buy’ confidence in a software product or cloud service, but also that simply performing a due diligence review of each one represents an enormous tax on the economy. Redundant, expensive, performative, and ineffective. Given our growing dependence on outsourcing and cloud services, it’s hard to imagine a more worthy open problem.
We are farmers of digital grain. We walk the landscape and sow the seeds of tomorrow’s crop; a crop of software, services, and infrastructure we use to support our community. We harvest stock that contains not merely nutrients, but also traces of all we have used to enrich the soil. When we buy a piece of fruit, we look at its shape and color, its scent and firmness, and we take note of its origin which we use as a proxy for a likelihood of contamination. We think of the supply chain, of recalls and adulteration worries; is it truly fresh?
As with the origin of fruit, this concern about the provenance of our tools - that collection of software, services, and infrastructure we regularly harvest - has become central to the identification of risk in our environment. Is it well made? Do worms lurk below the surface, and who has handled it and when? For the average cyber professional, ‘supply chain’ became an important term of art in 2020 following the SolarWinds attack by Russian hackers[1], although the phrase had been commonly used in the cybersecurity space for at least the decade preceding that[2].
For most of us outside of critical infrastructure (water, power, aviation, telecom), notions of protecting the supply chain largely grew out of the due diligence we put into reviewing vendor contracts. Those were the early days before the majority of our services were delivered by cloud vendors[3]. Even in these early days, before SolarWinds forced us to rethink the risk of modern services, we struggled to make headway in these contracts. By the time we were negotiating an agreement we had emotionally committed to a product; the vendor knew we were unlikely to change, and frankly we were naïve about what to ask for. I recall arguing with my staff that it really wasn’t worth investigating a vendor’s code management processes, that it was too in the weeds and not a high-risk area of enquiry. SolarWinds revealed the flaw in that reasoning - though I still suspect it was an outlier rather than a harbinger.
The notion that I, an ordinary, albeit professional, consumer of software or services, am going to meaningfully review the IT and development processes of every product or service I buy is laughable - the equivalent of testing each individual piece of fruit. Yet here we are - and worse, we’re all testing the same piece of fruit. The challenge becomes infinitely recursive. We buy a piece of software or rent a service that is itself built on collections of other software - a supply chain within a supply chain - each a sinkhole that needs monitoring, updating, and analysis[4]. Sure, some vendors go to great lengths to secure audits demonstrating their compliance with some standard or another, which is all well and good, but I would argue these are less about securing your confidence than providing a shield against liability.
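To make the recursion concrete: even a single purchased product quietly pulls in its suppliers’ suppliers, and a software bill of materials is the artifact that makes that nesting visible. The sketch below is purely illustrative - it assumes a tiny, hypothetical CycloneDX-style SBOM (all component and package names are invented) and walks its dependency graph to enumerate everything one purchase actually ships.

```python
import json

# Hypothetical CycloneDX-style SBOM fragment; the products named here are invented.
SBOM = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"bom-ref": "pkg:app", "name": "campus-portal", "version": "2.1"},
    {"bom-ref": "pkg:web", "name": "web-framework", "version": "5.0"},
    {"bom-ref": "pkg:log", "name": "logging-lib", "version": "1.4"},
    {"bom-ref": "pkg:net", "name": "http-client", "version": "0.9"}
  ],
  "dependencies": [
    {"ref": "pkg:app", "dependsOn": ["pkg:web"]},
    {"ref": "pkg:web", "dependsOn": ["pkg:log", "pkg:net"]},
    {"ref": "pkg:log", "dependsOn": []},
    {"ref": "pkg:net", "dependsOn": []}
  ]
}
""")

def transitive_deps(sbom, root):
    """Return every component reachable from `root` -- the supply chain
    within the supply chain."""
    graph = {d["ref"]: d.get("dependsOn", []) for d in sbom["dependencies"]}
    seen, stack = set(), [root]
    while stack:
        ref = stack.pop()
        for dep in graph.get(ref, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(transitive_deps(SBOM, "pkg:app")))
# ['pkg:log', 'pkg:net', 'pkg:web']
```

Even in this toy example, ‘buying’ one product means inheriting three other vendors’ engineering decisions; an SBOM for a real commercial product typically runs to hundreds or thousands of entries, each of which needs the same monitoring and updating.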
I don’t want to be seen as totally dismissing our efforts. Tools like the HECVAT are terrific in that they give us a framework that both streamlines and brings consistency to our vendor enquiries. It and similar approaches provide a starting point, a structure to guide our interrogation of a vendor. My reluctance is founded on two elements: the sheer scale of effort required at even a basic level of review, and my skepticism that even a perfectly executed vendor review is capable of providing the level of understanding required to anticipate or prevent the next SolarWinds.
I do think there’s another dimension to this discussion that isn’t often articulated. While contractual clauses largely operate as liability shields for companies, our internal contract and product review processes, despite their burden, act similarly for us. We insist on and establish a security or privacy review, and it allows us to claim compliance with standards such as NIST SP 800-53 and to assert that due diligence has been performed. This is particularly important since it’s unclear that we truly act on these assessments. Perhaps I’m an outlier, but over the course of my career I can only think of a few occasions where a product was actually rejected due to a security review. We did manage to get additional contractual protections in place, such as greater liability insurance limits or faster breach notifications. In some cases a vendor might add encryption to data at rest, but did any of this truly give us confidence in the product? Or are these merely liability rituals, ceremonies for which we serve as the high priests[5]?
And that leads to a more fundamental question: what do we actually mean by confidence in the supply chain? I want assurance that a service or product is free of defects, that it includes the necessary security components, and that I have complete knowledge of what it contains. I want to know that the vendor is managing its own supply chain responsibly, from components to development processes. To extend our analogy with food, I want the equivalent of the nutritional facts label and a regulatory framework surrounding it[6].
Essentially I want to know a product or service isn’t introducing new risk to my organization, and I want to know enough about a product that I can factor any potential risk into my overall risk analysis. I want good stuff that doesn’t make me sick. It really doesn’t seem like too much to ask.
But of course it’s difficult to imagine such a world. We devote a growing portion of our resources to the incantation of product and contract review. We scan our environments for known vulnerabilities and for every new product we find more flaws. We spend scarce time trying to get vendors to acknowledge flaws and expose ourselves to the consequences of unpatched or unpatchable flaws in the very products that our operations and missions depend on. We have normalized - if not reified - all of this as acceptable.
Why do we persist in this theater? Perhaps this is because our institutions have trained us to equate proof with safety. Our reviews conjure the proof, and from there we adopt a posture of relief, or at least confidence. But the truth is older and simpler: trust is not an artifact of process, it is a relationship. And like any relationship, it depends on transparency, reciprocity, and a willingness to be vulnerable - qualities our bureaucratic structures are ill-equipped to cultivate. Our vendors know this. They recognize that the transaction itself is not one of trust, yet even a transactional exchange relies on a kind of performed intimacy. This is why they recruit charming people who try so hard to establish that human connection through dinners and regular calls, briefings and drinks. They want us to proxy confidence with a friendly, trusting face.
Our fixation on process doesn’t end with internal reviews; it extends upward into how we imagine safety at scale. Regulation, too, is a kind of bureaucratized trust - a system designed to substitute oversight for relationship. But when enforcement is weak or incentives are misaligned, regulation merely formalizes the same hollow confidence we find in vendor checklists. The machinery of proof replaces the substance of assurance.
I have often argued that we are not merely curators of risk, but of trust as well. However, we also need to recognize that our trust in products and vendors is misplaced. Every product or service we buy is flawed.
Vendors have simply made a business decision that it’s cheaper to patch flaws than to engineer products that avoid them[7]. It’s cheaper for them because we collectively fail to hold them accountable. We accept the normalized practice of investing a large percentage of our security resources into vulnerability management, curating a vast portfolio of flawed products. We facilitate their profits at our expense. We suspend disbelief and accept the status quo since we see no other alternative. “Until I know this sure uncertainty, I’ll entertain the offered fallacy”[8].
I have no pat answer to the supply chain challenge. My own approach to this was to look at procurement in the same way I looked at all matters of risk, by focusing on the plutonium and ignoring the aluminum. For the high-risk (plutonium) services I pushed for vendors that could produce a collection of standards and audits.
Ironically, these were the very vendors we were both least likely to move and the least prepared to interrogate. Is it truly meaningful for your team to argue with an Oracle or Amazon about their IT management practices? And if they refuse to budge, is your management going to abandon a Google or Microsoft?
In practice, the more pernicious risk comes from the 100,000 other products and services (the aluminum), often created by startups with little more than the ability to spell “AI”, that are adopted throughout your environment. A laptop here, a student-developed project there: each creates a pathway from the unfiltered internet into your network. It’s hallucinatory to think the average academic environment will be willing to put at risk its agility and pursuit of novelty - core to the educational and research mission - by attempting to prevent this. Indeed, my advice is to rearchitect and segment networks to support this speculative activity. A kind of resigned pragmatism.
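The segmentation advice above reduces to a simple principle: default deny between zones, with an explicit allowlist for the few flows the mission requires, so that unvetted devices and student projects can experiment freely without a path into enterprise systems. The sketch below is a minimal illustration of that policy logic only - the zone names are hypothetical, and a real deployment would express this in firewall or SDN rules rather than application code.

```python
# Default-deny segmentation policy (hypothetical zone names): a flow between
# zones is permitted only if it appears on the explicit allowlist.
ALLOWED_FLOWS = {
    ("research-sandbox", "internet"),      # speculative projects get out, not in
    ("iot-devices", "update-servers"),     # unmanaged gear reaches patching only
    ("managed-clients", "erp"),            # vetted endpoints reach business systems
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default deny: anything not explicitly listed is blocked."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# The sandbox can reach the internet, but never the ERP system.
assert flow_permitted("research-sandbox", "internet")
assert not flow_permitted("research-sandbox", "erp")
```

The design choice matters more than the code: enumerating what is allowed (rather than what is forbidden) means a new, unreviewed product lands in a zone with no enterprise access by default, which is exactly the resigned pragmatism described above.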
It’s tempting to look to other domains of regulation for models. Perhaps nutritional supplements? Supplements are notoriously under-regulated, and what regulation exists is scarcely enforced[9]. However, in theory the FDA can remove unsafe products, oversee manufacturing, require accurate labeling, and monitor adverse events. Could we apply this model to software? The answer reveals why supply chain confidence is so elusive. Unlike supplements, software products are infrastructure - removing a flawed ERP system would instantly disable the organizations depending on it. Announcing a vulnerability doesn’t protect users; it notifies every attacker globally while users scramble to patch or replace. The supplement model assumes products are replaceable and removal is protective. Software is neither.
The true regulatory parallel for software lies not in removal, but in enforcement and accountability: we must apply the rigor of current Good Manufacturing Practices (cGMP) to the digital supply chain. The predictable objection is that standards will stifle innovation and burden small businesses. But we’ve heard this refrain before - against every act of consumer protection from seatbelts to food labeling - and industries have always adapted. What we cannot continue to afford is treating cybersecurity as an externality, its costs borne by customers while vendors profit from flawed design. The “innovation” argument mistakes convenience for progress and self-interest for principle.
My own recommendation is that we begin by attacking the supply chain problem through vendor liability. We can architect and engineer solutions to software and service reliability until the cows come home, but without real, significant, and crippling liability for product flaws, it’s difficult to believe manufacturers will adopt them. When I think about the hurdles to achieving this, it feels oddly similar to the struggle of ordinary citizens to compete with the infinitely deep pockets of lobbyists for political attention. We send $5 or $10 to a candidate, millions of voters at a time, only to see a member of the oligarchy drop $250 million to elect his favorite candidate.
That is to say, while collective action might be our only path to reform, we face a seemingly insurmountable structural hurdle: the only voices with access to policymakers belong to the vendors we depend on. The supply chain problem is political, not technological - and in that political contest, we’re clearly at a disadvantage.
So we’re left curating risk in a broken market, performing security theater to satisfy compliance requirements, and hoping we’re not the next SolarWinds. It’s not a satisfying conclusion. But recognizing that we’re managing symptoms rather than causes - that’s at least honest. And in a field drowning in happy talk about ‘security maturity models’ and ‘zero trust architectures,’ honesty might be the most subversive thing we can offer.
In short, I think our community needs to organize and plan for a time when a coordinated effort to influence policy makers may bear fruit. We need to think about what sorts of triggering events might kick off such a campaign, and work across institutions and organizations on a consistent message. That message would be: insecurity shouldn’t be a business model.
Notes

Ah, the quaint days when Russian hackers were the bad guys, rather than being called upon and embraced by US leadership.
The earliest reference I can vaguely recall is from 2012 when the EU released a report highlighting the importance of security and the supply chain for what it called ICT or “Information and Communication Technology.”
Just for context, I was involved in negotiating a contract for Gmail I think somewhere in the 2004-6 timeframe. It was not a lot of fun. Google had what we wanted, and had 100 attorneys for each one of us on the negotiating team.
It’s worth asking: are security reviews truly negotiations with vendors, or are they negotiations with our own organizations? Merely exercises that answer the question, ‘why are you buying this, and just how are you planning to use it?’
In fact, recently there’s been interest in establishing a software bill of materials (or SBOM) to start to address the knowledge problem. CISA has a webpage on the subject.
A reasonable counterposition is to acknowledge that some vendors do invest heavily in security. But equally so, it is comically obvious that these are half measures, even at the largest firms.
Shakespeare, The Comedy of Errors.
I have to point out that, with minimal enforcement, even these actions are largely aspirational. Anyone who’s looked into nutritional supplements immediately recognizes that the market is a grifter’s paradise.
Well said. And while I may be slightly less “cynical” than you are regarding the futility of efforts to improve “program maturity” and deploy “zero trust architectures,” I fully agree that “our community needs to organize and plan for a time when a coordinated effort to influence policy makers may bear fruit. We need to think about what sorts of triggering events might kick off such a campaign, and work across institutions.” But inspiring this type of participation and solidarity across the community would likely first require the ability to concretely and convincingly define the actual risks (impact and likelihood) - especially at a time when research and education already face so many risks.