Cybersecurity requirements for research
Some recommendations vis-à-vis NIST 8481
As I wrote about in an earlier post, NIST 8481 isn’t the document we were all hoping for - a prescriptive description of the cybersecurity requirements for research security programs. Thus we find ourselves with regulations that require ‘alignment’ with 8481, when it offers little or nothing to align with. The path to this problem isn’t hard to follow - 8481 began as an exploration to scope future work: what challenges are universities facing in securing research programs? But once it was codified by the CHIPS and Science Act, the die was cast, and here we are.
Given that this appears unlikely to change anytime soon, I had hoped (while I was still at NSF) that NSF, as the lead agency for research security, would publish an ‘understanding’ of 8481 to guide universities in establishing the cybersecurity elements of their research security programs. Indeed, I was sitting down to draft this when the destruction of NSF began. What I’d like to do in this post is sketch the guidance I was (and am) hoping we’ll see for universities and agencies as they prepare to require and evaluate research security programs.
Let’s begin by stating my fundamental assumptions.
1. Federal agencies are neither prepared nor resourced to evaluate the programs at each university. The volume is too high, and there will simply be too many local variations in practices and documentation.
2. Given the heterogeneous nature of institutions, we should provide universities with as much agency as possible in designing and operating their research security programs.
3. Given federal agencies’ resource limitations and the heterogeneous nature of institutions, it’s safe to conclude that schools should be asked to attest to their compliance, but also warned that they could be asked to provide documentation to that effect should the need arise - for example, in response to an investigation stemming from a significant event or a whistleblower complaint.
4. While it seems self-evident that some measure of institutional policy is required (imposing cybersecurity practices on researchers), the documentation we recommend universities prepare should include a description of how their program operates. That is, policies are often aspirational, whereas practice describes how things are actually accomplished.
5. While many schools may choose from a variety of standards and frameworks, or create their own, these must be mapped directly to federal standards and frameworks. This goes to the heart of what it means to ‘be aligned’ with NIST 8481. As with #1, it is only reasonable to expect the federal government to measure against, and maintain expertise in, federal standards. If a school chooses to ignore federal guidance, then the work of showing their choice is equivalent falls to the school.
6. We should strive to avoid creating a HIPAA-esque bureaucracy for research cybersecurity. By that I mean a burdensome level of controls that provide little or no additional risk reduction, but are focused on the oversight process itself. While I anticipate that over time the extent of what constitutes a ‘sufficient’ research cybersecurity program will grow, we should avoid starting by asking for too much. We need to recognize that the sheer scale and diversity of research, coupled with the lack of historical investment, represent difficult hurdles. If the initial implementation requirements are too heavyweight, the program (and institutions) will fail.
7. This moment is an inflection point for higher education, research programs, and cybersecurity. If we fail to embrace it we will not only have lost a historically unique opportunity, but we will open the door to a ‘reasonableness-blind’ imposition of regulations by well-meaning but frustrated regulators and legislators. While part of what we want to do is protect universities from burdensome requirements, we must impose some truly effective level of cybersecurity controls on all research programs.
8. Building on the principle of imposing truly effective controls, we should require a small baseline set of controls: primarily ‘hard’ practices (such as MFA), but mixed so that research programs get ‘credit’ for institution-wide practices (such as network intrusion detection or vulnerability scanning).
9. Though the certifications of compliance are institutional, this is an opportunity to simplify answering “are we compliant?” by requiring that faculty and researchers attest in every grant application that they are compliant with their institutional research security program. Whether cybersecurity is itself called out, or we simply reference the broader research security program, is a discussion point. Personally, I’d be inclined to call out cybersecurity in some fashion, since for many faculty and researchers this will be novel.
With that spelled out, let me try to define what a program meeting these assumptions might look like. An effective but lightweight program can be described using three elements: an Information Assurance Management Plan (IAMP), a modest set of cybersecurity controls, and an attestation of compliance by every researcher or faculty member [1].
An IAMP is a short document describing the elements of your research cybersecurity program. It has a number of required components, listed below (a sketch of what this might look like as structured data follows the list).
- A statement of your cyber risk management strategy
- A cybersecurity framework (e.g., the NIST Cybersecurity Framework (CSF))
- A baseline cybersecurity control set (mapped to NIST 800-53)
- Scope and Boundaries
- Roles and Responsibilities (including research affairs, the security office, and researchers)
- Governance and Oversight (likely a set of committees)
- Information Assurance Program Operations (enumerating and describing the core processes, functions, and responsibilities of the research cybersecurity program)
- An assessment plan (how you plan to monitor whether your program is functioning as stated)
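To make that concrete, here is a minimal sketch of an IAMP skeleton represented as structured data, with a trivial completeness check. Everything here (the field names, the example values, even the idea of machine-checking it at all) is my own hypothetical illustration, not a schema drawn from 8481 or the RIG.

```python
# A minimal, hypothetical sketch of an IAMP as structured data.
# Field names and example values are illustrative, not a mandated schema.

REQUIRED_COMPONENTS = {
    "risk_management_strategy",
    "cybersecurity_framework",
    "baseline_control_set",
    "scope_and_boundaries",
    "roles_and_responsibilities",
    "governance_and_oversight",
    "program_operations",
    "assessment_plan",
}

iamp = {
    "risk_management_strategy": "Risk-based; residual risk accepted at the VP-Research level.",
    "cybersecurity_framework": "NIST Cybersecurity Framework (CSF)",
    "baseline_control_set": "RIG §5.3.6 critical controls, mapped to NIST 800-53",
    "scope_and_boundaries": "All federally funded research; excludes enterprise systems.",
    "roles_and_responsibilities": {
        "research_affairs": "program ownership, attestation tracking",
        "security_office": "monitoring, scanning, incident response",
        "researchers": "local controls, e.g., anti-malware on lab systems",
    },
    "governance_and_oversight": "Research security committee; annual report to CIO/VPR.",
    "program_operations": ["network intrusion detection", "vulnerability scanning",
                           "inventory maintenance", "data backup"],
    "assessment_plan": "Annual self-assessment against the baseline control set.",
}

def validate_iamp(doc: dict) -> list[str]:
    """Return the required IAMP components missing from the document."""
    return sorted(REQUIRED_COMPONENTS - doc.keys())

if __name__ == "__main__":
    missing = validate_iamp(iamp)
    print("IAMP complete" if not missing else f"Missing components: {missing}")
```

The point is not the code; it’s that normalizing the documentation makes self-assessment, and any eventual agency review, largely mechanical.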
It may seem odd to begin by defining the required documentation of your program rather than the program itself. I suspect most cyber professionals will want to start by defining policy requirements, from which the shape of their program flows. You’ll note, however, that the required elements of an IAMP actually do define the program quite nicely - but in keeping with my general inclinations, they lean into practice over policy.
The reason I’ve chosen to start this way is twofold. On one hand it provides broad freedom for schools to define their program; on the other it normalizes what the documentation of that program should be. While some of the IAMP elements will be highly idiosyncratic to the institution (e.g., governance and oversight), cybersecurity operations tend to be quite similar in function (for example, network intrusion detection or vulnerability scanning). Every cybersecurity program will perform these sorts of activities in some fashion; it seems overly prescriptive for an external agency to require each specific functional activity.
As to the responsibilities detailed in the IAMP, these may be the most important component to clearly document. While it’s likely that the majority of program operations will be handled by a university’s central cybersecurity shop, some elements will fall to research lab managers or staff - such as ensuring anti-malware software is deployed where appropriate within a lab. It is impossible to address accountability without defining responsibility.
The baseline cybersecurity control set will undoubtedly be the most contentious recommendation I can make. Every cybersecurity professional will have their own sense (from intuition, experience, or analysis) of the smallest, most effective set of controls that should be required. My recommendation is to adopt the set of 14 controls listed in §5.3.6 of the Research Infrastructure Guide (RIG), “Critical Controls,” summarized in table 5.3.6-1. A word on the creation of this list: during my time at NSF, I reviewed all the documentation I could find on significant cyber incidents at the major facilities. While doing so, I took note of which security controls the affected facility implemented in response - presumably controls that would have mitigated the incident in the first place. What was striking was the uniformity of the responses. I aggregated them and found fourteen controls that were consistently implemented. Thus the list was created.
If you review the list, you’ll note that these are quite basic controls. Most are what I call ‘hard’ technical controls, while only five are process controls (such as inventory maintenance). Individually they’re all fine and defensible. The question is: are they the best 14 controls to require of research programs? Let the religious wars begin.
Beyond the assumptions listed above, I think the other critical question we should ask of these controls is whether, or to what degree, any of them will interfere with a specific research program [2]. I don’t think so, but it’s worth remembering that cybersecurity is supposed to be flexible and adaptable. If a specific control disrupts the science or the scientific workflow, then it should not be implemented. It’s important not to lose sight of this, as often happens when imposing standards or when auditors get involved. Remember, the goal is always to lower risk to an acceptable level, not to blindly implement standards. The standard is not the goal.
This control set checks a number of boxes for me: it’s short (and possibly prunable), should not interfere with research workflows even with broad implementation, leverages institutional processes wherever possible, and is derived from incidents in research facilities. The RIG also provides approximate mappings to NIST control sets and thus can reasonably be seen as a filtered view of NIST recommendations.
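To illustrate what ‘mapped to NIST 800-53’ might look like in practice, here’s a hypothetical sketch of a school documenting each local control alongside the 800-53 control(s) it claims equivalence with. The local control names and descriptions are invented for illustration; the 800-53 identifiers are real controls.

```python
# Hypothetical mapping of institutional controls to NIST 800-53 equivalents.
# Local control names are invented; the 800-53 IDs are real controls.

CONTROL_MAP = {
    "campus-mfa": {
        "description": "MFA required for all campus and lab logins",
        "nist_800_53": ["IA-2(1)", "IA-2(2)"],  # MFA, privileged / non-privileged
        "provided_by": "institution",
    },
    "lab-antimalware": {
        "description": "Anti-malware deployed on lab systems where appropriate",
        "nist_800_53": ["SI-3"],  # Malicious Code Protection
        "provided_by": "research lab",
    },
    "central-vuln-scanning": {
        "description": "Regular vulnerability scans of lab networks",
        "nist_800_53": ["RA-5"],  # Vulnerability Monitoring and Scanning
        "provided_by": "institution",
    },
    "research-data-backup": {
        "description": "Offsite, versioned backups of designated research data",
        "nist_800_53": ["CP-9"],  # System Backup
        "provided_by": "research lab",
    },
}

def coverage(control_map: dict) -> set[str]:
    """Return the set of NIST 800-53 controls this program claims to cover."""
    return {c for entry in control_map.values() for c in entry["nist_800_53"]}

if __name__ == "__main__":
    print("Claimed 800-53 coverage:", sorted(coverage(CONTROL_MAP)))
```

A field like provided_by also operationalizes assumption #8 above: research programs get ‘credit’ for institution-wide practices without having to re-implement them locally.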
Finally - and I won’t spend much time on it - I’m inclined to suggest that every proposal submission an institution processes should require the submitters to attest that they are, and will remain, in compliance with their institutional research cybersecurity program. Putting this into place will obviously create some friction. Some researchers will insist the program is unreasonable, and many more will justifiably ask, ‘how do I know if I’m in compliance?’ There will also be institutional concerns: if a university has 500 faculty applying for grants, and 5 refuse to assert their compliance, is the institution still able to certify? If one faculty member is dishonest and asserts compliance but then suffers a breach due to non-compliance, who suffers any sanctions?
We need to bring common sense to these questions. A large institution shouldn’t be penalized for the behavior of some tiny portion of its community. By requiring the individual to attest to compliance we’d be putting a stake in the ground for cybersecurity. Responsibility for attestation (“to the best of our knowledge”) falls to the institution, but liability for failing to implement local security controls must remain with the researchers.
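For concreteness, here’s a hypothetical sketch of what the attestation bookkeeping might look like at submission time. The record shape, names, and logic are all invented, but it illustrates the split argued for above: a holdout blocks their own proposal, not the institution’s program-level certification.

```python
# Hypothetical sketch of attestation bookkeeping at proposal submission time.
# Record shape, names, and thresholds are invented for illustration.

from dataclasses import dataclass
from datetime import date

@dataclass
class Attestation:
    researcher: str
    proposal_id: str
    attested: bool        # "I am, and will remain, in compliance"
    date_signed: date

def proposal_ready(attestations: list[Attestation], proposal_id: str) -> bool:
    """A proposal can go out when every listed submitter has attested.
    An empty record set means we can't certify. A holdout blocks their
    own proposal, not the institution's program-level certification
    (which remains 'to the best of our knowledge')."""
    relevant = [a for a in attestations if a.proposal_id == proposal_id]
    return bool(relevant) and all(a.attested for a in relevant)

if __name__ == "__main__":
    records = [
        Attestation("pi@lab.example.edu", "PROP-0001", True, date.today()),
        Attestation("copi@lab.example.edu", "PROP-0001", True, date.today()),
    ]
    print("Ready to submit:", proposal_ready(records, "PROP-0001"))
```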
Let’s see if my program recommendations align with the goals and assumptions above.
1. Federal agencies will not have to review the program of each institution, but merely accept their attestation of compliance.
2. While the documentation for each program will be consistent, each school will be afforded control over the design and implementation of its program. If a situation arises that requires evidence, agencies will receive the same form of documentation from every school, which means they can become familiar with practice norms and how to evaluate them.
3. Schools will only need to attest to compliance, not provide evidence in advance.
4. The IAMP is by definition a runbook for the research cybersecurity program. While it may include pointers to relevant policies, it meets assumption #4 by focusing on practice over policy.
5. Schools will have to map whatever framework or controls they choose to NIST equivalents. This simplifies the requirement for both schools and agencies: a federal standard for federally funded activity, yet schools remain able to tailor their programs as appropriate. Essentially we’re codifying that ‘alignment’ means each element of your program points to an identical or equivalent element from NIST - and is thus mappable and achieves the same risk reduction goals.
6. By reducing the number of required controls, and minimizing the documentation requirements, we have achieved nirvana: a minimization of bureaucracy.
7. By requiring concrete cybersecurity controls, mapped to federal standards, we both move the needle on research cybersecurity practice and demonstrate to legislators that higher education is serious about cybersecurity. We avoid the ambiguity of simply stating, “we secure our data.”
8. As I described, the controls listed in the Research Infrastructure Guide are practical, recognized as effective, minimal in number, and driven by experience.
9. By requiring researchers to assert compliance, we include them in the liability chain and apply pressure where it has not formally existed.
Implementation of such a program, as minimal as it is, will still be challenging for most institutions. Most schools simply underinvest in supporting research programs, and most central IT shops are barely aware of the breadth of research activities on their campuses. Even relatively simple controls such as MFA are probably beyond the capacity of a small research lab; labs will need assistance moving their authentication and authorization (authn/authz) to campus solutions, which have probably already integrated MFA. As mentioned earlier, data backup is fraught with challenges - from understanding what truly needs robust backups to simple matters such as who pays for desktop backup. Few labs truly segment administrative accounts from ordinary accounts, since it’s easiest if everyone is root. And given that no one is prepared to collect and monitor all system logs from research labs, universities will need to decide what level of investment is warranted. All of these will require some investment of resources, particularly in the near term, to help smooth the transition to a new status quo.
In closing, I want to return to my belief that this moment is an inflection point for higher education, research programs, and cybersecurity. I really do understand the trepidation schools are feeling as they contemplate the challenge of a research cybersecurity program. This is largely an unfunded mandate, though it’s worth stating that many, if not all, of the major funding agencies are more sympathetic to including cyber costs in proposals than ever before. Line items for data backup, funding for a partial security FTE, or fees for cybersecurity software are increasingly viewed as a positive feature of a proposal rather than an unnecessary cost. That is to say, they will increase competitiveness, not reduce it.
If we miss this opportunity to introduce a fundamental change in how we view cybersecurity within the research ecosystem, I suspect the long tail of that lost opportunity will be far more expensive and problematic than anything we’re considering now.
[1] The program described here draws extensively from §5.3 of the forthcoming Research Infrastructure Guide (RIG) by NSF. Many of the RIG’s goals are the same, and they steered the components described here. As of today, the referenced RIG is a draft awaiting publication with only minor changes. I suggest reviewing it for a longer exegesis on each element.
[2] Some of these controls do come with costs, most specifically the data backup requirement, NSF-7. In my own experience, while many or most faculty do some limited data backup, the majority of these backups are not sufficient to protect against a ransomware attack or a malicious lab member. Institutions will need to resolve cost issues such as this on their own, or attempt to include some backup costs in grant proposals.


