Do we need a cybersecurity research program?
If we cannot answer these questions, others will answer for us.
Like many CISOs, I have had a complicated relationship with faculty who do research in the cybersecurity space. There’s the one who wanders into your office and asks you to deploy campus-wide some system they’ve tested on 200 MB packet captures.1 Or the researcher who sent hundreds of thousands of malware-laden messages across the Internet to watch the response (and got angry when we blocked the traffic). I guess I should be grateful he wasn’t studying bullets and testing them on the campus quad.
To be clear: the issue isn’t with the researchers; it’s our difficulty in supporting and participating in the research. The most successful partnership I had was with an extraordinary faculty member who understood that we were an administrative office and, as such, had certain policy (and ethical) obligations. Working with any form of student information, for example dorm traffic, required sign-off from the registrar and the IRB, which he willingly pursued. To expose his graduate students and postdocs to our operational data, we hired them into the office to work on projects that benefited both parties, with their salaries paid by the researcher. But this, of course, was an unusual situation: a singular experience owing to that researcher’s collaborative nature.
Given the pace and cadence of research projects versus what I hesitate to call pace in our administrative world (perhaps glacial or even generational is the better term), how we leverage our operational activities to support the research mission was, and for most of us remains, a challenge. Yet I think it is one worth wrestling with, for a couple of reasons. First, it seems like the ethical thing to do: while I remain skeptical that, from a threat perspective, higher ed is all that distinct from any other entity, leveraging our aggregate experience as ‘complex entities on the Internet’ to improve the practice of cybersecurity seems like a no-brainer. Furthermore, bringing the scale and operational reality of organizational cybersecurity to research projects can only benefit that research. That is, we can help ground research projects in a way that will speed the transition to practice.
Second, by integrating our operational activities with the institutional mission we subtly change our relationship to that mission. We ever so slightly move the needle from cost center to mission-centric, or perhaps even revenue-generating. Not to mention the benefits to your team: cross-pollination with researchers and tight coupling to mission can only help with professional development and staff retention.
But I want to turn to the broader question of what it means for higher education security practitioners to have a research program on cybersecurity and not merely to participate in programs led by faculty researchers. By research program I mean a coherent, long-term line of inquiry that frames and guides our work. I’m sure many of you are saying “but bonehead, er Mike, we’re not researchers; we’re hired to run cybersecurity at our campuses.” That’s entirely true. I’ve even seen some security professionals spanked for simply being “too” involved in national organizations.
This neglects that we are also part of a community, and if we’ve learned nothing else from the explosion of cyber threats over the last couple of decades, it’s that information sharing and collaboration are as essential to our function as crusts are to pies. Without either we’re left with a mess of blueberries on the floor. But I would go even further: individually, we are incapable of the kind of resilience we collectively need to demonstrate.
Imagine you’re a senator and arrayed before you are a dozen universities. Some have suffered major cybersecurity incidents, others merely workaday minor annoyances. There’s no clear correlation between their investment in cybersecurity or institutional scale and the severity or frequency of incidents. Now whom do you ask, “why are some of you avoiding incidents while others are not?” We really don’t have an answer. There’s no place to look to see who’s truly seeing a lot of hostile activity and who isn’t. We can’t even meaningfully compare investments in cyber. If you then ask, “what are ‘you all’ doing about cybersecurity? How are you approaching it?” we’d have to respond with twelve different answers, with the usual caveats that everyone is different, so we’re only representing ourselves, not the general community.
If you’re convinced that we’re all special snowflakes with regard to cybersecurity, how do you respond? With sweeping new cybersecurity regulations? By asking everyone to report to some central authority on the nature of our cybersecurity incidents? By turning to someone who speaks with more confidence and the accoutrements of power and authority: the private sector? Perhaps you establish the equivalent of the NTSB to investigate and recommend responses to cyber incidents; but for the NTSB a plane is a plane and a train is a train. Is the same true for universities and businesses?
Now imagine that in response to the senator we reply, “our industry2 has established a series of practices, based on the canonical work by NIST, that protects our contribution to the national health, wealth, and security. We not only collect threat- and incident-related metrics from across most of our organizations, but also engage in a research program of refining contemporary cybersecurity practices by tailoring them through feedback from our member organizations. While many of the functions within our individual jurisdictions are regulated by external, often Federal, bodies, collectively 80% of our membership have made attestations of compliance with our body of practices. Further, we have opened access to both our metrics and our research program and data to the Federal and State agencies tasked with local, state, and federal cyberdefense.”
My hope would be that you would recognize a well-oiled machine humming along, and that any action you might take would merely throw grit in its gears. That is, exercising our own agency would engender continued agency. And thus we finally arrive at our research program.
Admittedly, the response to the senator is aspirational and rests on a number of rather difficult assumptions. It assumes we can agree on an ontology for incident reporting3; that, independent of incidents, we can create a way to characterize malicious activity directed at our institutions; that we can characterize implemented practices such that a million footnotes and caveats aren’t necessary; that we can at least correlate incidents and activity with effective practices; and perhaps the biggest hurdle, that our lawyers would allow this kind of information exchange to take place4.
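To make the first of those assumptions a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of what a shared, de-identified incident record might look like. Everything in it, from the field names to the categories to the pseudonymous institution ID, is my own invention for the sake of discussion, not an existing standard or anyone’s actual schema.

```python
# Illustrative only: a minimal sketch of what a shared, de-identified incident
# record might look like. Field names and categories are assumptions for
# discussion, not an existing standard or anyone's actual schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class IncidentCategory(Enum):
    """Hypothetical top-level categories a sector ontology might agree on."""
    ACCOUNT_COMPROMISE = "account_compromise"
    RANSOMWARE = "ransomware"
    DATA_EXPOSURE = "data_exposure"
    BUSINESS_EMAIL_COMPROMISE = "bec"
    OTHER = "other"


@dataclass
class IncidentReport:
    """One pseudonymous incident record an institution might contribute."""
    institution_id: str                 # pseudonymous ID, not the institution's name
    reported: date
    category: IncidentCategory
    records_affected: int | None        # None when unknown or not applicable
    controls_in_place: list[str] = field(default_factory=list)  # e.g. ["mfa", "edr"]
    estimated_cost_usd: int | None = None

    def validate(self) -> list[str]:
        """Return a list of problems rather than raising, so aggregation can continue."""
        problems = []
        if self.records_affected is not None and self.records_affected < 0:
            problems.append("records_affected must be non-negative")
        if self.reported > date.today():
            problems.append("reported date is in the future")
        return problems


# A single report, ready to be pooled with others across the sector.
report = IncidentReport(
    institution_id="edu-0042",
    reported=date(2024, 3, 14),
    category=IncidentCategory.ACCOUNT_COMPROMISE,
    records_affected=0,
    controls_in_place=["mfa", "edr"],
)
assert report.validate() == []
```

Even a toy schema like this surfaces the hard parts immediately: who decides the categories, what pseudonymization will satisfy the lawyers, and how “controls in place” avoids the million-footnote problem.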
This is the kind of work that would take time and considerable participation from IT and cybersecurity practitioners, as well as from faculty who could bring analytical rigor to it. As an engineering exercise it’s all achievable. I suspect even the lawyers could be brought along if the institutions decided they wanted to participate. In all honesty, I suspect the biggest resistance would come from security practitioners, already overworked and struggling to see any short-term benefit. While it may take some suspension of disbelief, I believe many institutions would be on board. They struggle to balance the need to reduce cyber risk against persistent ambiguity about what is actually needed. While cost pressures are real, I suspect it’s the absence of the clarity needed for decision-making, more than the actual costs, that is holding back greater investment in cyber.
Essentially I’m arguing that our “coherent, long-term line of inquiry that frames and guides our work” would provide that clarity to our executives. It would create a stratum of activity that is intrinsically anti-parochial, one that tailors and scales cybersecurity practices by and for higher education and, in doing so, allows higher education to act not as a constellation of isolated institutions but as a civic actor with shared responsibilities and shared stakes.
As I reread this post, I recognize that what I’m describing is less the research program than an intermediate step: a kind of enhanced ISAC, one that emphasizes analysis, sector coordination, and an attestation framework. I see this as a precondition for the research program I’m proposing. I would hope that the core work of the research program would use the information collected to tackle the kind of practical questions we are facing: What security architectures are most effective for distributed research environments? How do different organizational structures affect incident response effectiveness? What’s the relationship between security investment allocation and breach outcomes? How do cultural factors in academic institutions affect security posture?
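To gesture, in the simplest possible terms, at what using that collected information might look like, here is a toy sketch that compares mean annual incident counts between institutions attesting to a given control and those that don’t. The pooled records, field names, and numbers are invented for illustration; any real analysis would have to control for institutional size, research intensity, and a long list of confounders.

```python
# Illustrative only: a toy comparison over hypothetical pooled records.
# The data, field names, and the naive "with vs. without" split are invented;
# a real study would need proper controls and far more data.
from statistics import mean

pooled = [
    {"controls": {"mfa", "edr"}, "incidents": 1},
    {"controls": {"mfa"}, "incidents": 3},
    {"controls": {"edr"}, "incidents": 2},
    {"controls": set(), "incidents": 5},
]


def incident_rate_by_control(records: list[dict], control: str) -> tuple[float, float]:
    """Mean annual incidents for institutions with vs. without a given control."""
    with_ctrl = [r["incidents"] for r in records if control in r["controls"]]
    without = [r["incidents"] for r in records if control not in r["controls"]]
    return mean(with_ctrl), mean(without)


with_mfa, without_mfa = incident_rate_by_control(pooled, "mfa")
print(f"mean incidents with MFA: {with_mfa:.1f}, without: {without_mfa:.1f}")
```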
Earlier I referenced the NTSB as a model. What makes the NTSB effective is not merely that it performs investigations, but that its investigations produce findings and recommendations that drive regulatory change. It bridges incidents with context and consequences. While a voluntary, member-run organization will never have the weight of a regulatory body, if we are to speak with a unified voice as higher education security practitioners, surely we should be able to answer questions such as these for our sector. Are these not the kinds of questions that would allow us to shape a higher-education-wide set of standards, rational and tailored to our distinctive nature? If we cannot answer them, others will answer for us.
For the uninitiated, campus networks generally run at 10-100 gigabits per second. A modest 200 MB packet capture is a rounding error at that scale.
My use of the word ‘industry’ at the opening of this hypothetical response is deliberate.
https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/cyber-incident-reporting-critical-infrastructure-act-2022-circia. I believe implementation of this was pushed into 2026. There are a host of questions about CIRCIA and similar efforts, and my comment doesn’t mean I support them. Reporting incidents as defined by CIRCIA will be time-consuming and expensive, without any clear value beyond “that’s interesting.” That’s not to say, however, that value couldn’t be found in incident aggregation and reporting; I’m just not seeing it in existing Federal efforts.
Having had to complete questionnaires on our cybersecurity practices for insurers, I can attest that simply saying “we do MFA” involves three layers of caveats. Universities are nothing if not collections of exceptions and niche demographics. Simply standardizing a response to “what do you spend on cybersecurity” might exceed our grasp.


