Policy by scar tissue
Notes towards a sustainable information security policy
This is the first of two posts on information security policy. This initial post explores the various tensions a solid and immutable policy needs to balance. A follow-up post will provide more specific recommendations and guidance, including how to couple policy to implementation.
My first encounter with an information security policy was before I entered the field. I had been invited to a meeting to discuss whether to adopt, at the system level, an infosec policy that audit had written. I can’t recall why I was specifically invited (charm and good looks were certainly not involved) but the meeting was with the campus and system CIOs, the system CFO, and the head of audit. One of the campus CIOs kept pressing, “Will I have to implement this? I don’t have any resources for it.” The truth was that he really didn’t see any value in it. The policy was so lightweight it offered little or no progress over what his campus was already doing. Finally, he extracted what he wanted - as long as it didn’t make any work for him or his organization, he’d support it1. Charitably, I could call this an allergy to unnecessary bureaucracy. Less charitably, parochialism.
After the meeting I cornered the head of audit to ask why we had gone through this exercise; given the caveat attached to adoption, wasn’t it just a meaningless performance? He responded that, in effect, one had to play the long game. Once the policy was established, no matter how weak, you could always revisit it. That is, it was the next version that mattered.
You can see that there are at least three dimensions to this anecdote. First, policy is a political artifact; it is less about technical controls and more about codifying “management will” and acquiring political leverage. The first version simply establishes the right to have a policy. Second, it reflects a theory of governance incrementalism, i.e., establishing a weak policy in order to have a stronger one later. Finally, it shows the conflict between authority and implementation. The CIO’s refusal to implement without resources highlights the policy’s failure to be paired with the means to carry it out.
This entire experience has remained with me despite happening in the late ‘90s. It was probably the first time I saw, and recognized, the political judo performed by the head of audit. But in hindsight it has me questioning why we’re stuck perpetually revising our information security policies - shouldn’t a good policy be robust enough to last decades? Or is that simply too naïve? After all, the world changes, tech changes, and thus the policy guiding our path through all that also needs to change. This post is concerned with what makes for an effective information security policy, and explores some of the tensions we need to balance to achieve this2. Most of these aren’t challenges to be ‘solved’, but tensions intrinsic to any policy - tensions that need to be understood and balanced if we want to manage them effectively.
Foremost among these tensions is that between stability and change. If we’re to have an infosec policy that is stable yet contains dynamic elements to reflect a changing environment, then it seems we need to consider the relationship of a policy to the agents of implementation and governance. That is, how infosec is governed informs how the policy is written, and vice versa. This is actually a pretty high bar - institutional governance of cybersecurity being what it is3. Too often ‘governance’ becomes a tool that blunts the effectiveness of cybersecurity controls, rather than one that sharpens them.
Yet cyber professionals lean heavily into Policy - big P policy - precisely because of our frustration with institutional practices. Perhaps this can be called the tension between policy and practice, or between policy and practitioner. We can’t get users to install anti-malware software with EDR, so we want a policy that says you must do so. We can’t ensure researchers are backing up critical data, so we want a policy requiring it. But of course, these aren’t really policy statements; they’re specific controls we want implemented to mitigate specific risks. If we place these statements in policy, then we immediately ensure that as soon as the risk or mitigation approach changes, the policy is out of date - a liability. And as we know, updating big P policy is not a fast process.
Compounding this is the simple fact that infosec policies tend to have a broad scope, yet most infosec offices are more narrowly defined. This feels like a variation of the tension between policy and practice. Can you really write one policy that addresses the risks associated with research, teaching, medical services, and the myriad administrative functions such as NCAA and PCI requirements? Maybe, but should we? In one position I held, I wrote a statement that said, ‘we have a variety of highly regulated environments, and the infosec policies for those environments are the regulations covering them, not the university’s ordinary infosec policy.’ Why on earth would I spend my time mapping federal regulations for CMMC or HIPAA, or PCI standards, to my local security policy? Life is just too short4.
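To make that scoping statement concrete, here is a toy sketch of the routing rule it implies - regulated environments resolve to their governing frameworks, and only everything else falls to the local policy. The environment names and mappings are purely illustrative assumptions, not drawn from any actual institution:

```python
# A sketch of scope routing: regulated environments are governed by their
# regulations, not the university's ordinary infosec policy.
# All names below are illustrative assumptions.
REGULATED_ENVIRONMENTS = {
    "defense_research": "CMMC",
    "clinical_systems": "HIPAA Security Rule",
    "payment_processing": "PCI DSS",
}

def governing_framework(environment: str) -> str:
    """Resolve an environment to the rules that actually govern it."""
    return REGULATED_ENVIRONMENTS.get(environment, "university infosec policy")

assert governing_framework("payment_processing") == "PCI DSS"
assert governing_framework("teaching") == "university infosec policy"
```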
I’ve had colleagues argue with me about this. They tend to view controls as controls, context be damned. I’d argue that this is a myopic view of policy and controls. To return to my favorite metaphor, policy isn’t theology - much of what makes a policy effective is how well it respects and responds to context and culture, just as an effective infosec program is more than adopting a standard and implementing it.
And what of culture, that ultimate expression of context? We often hear of the tension between culture and strategy. In an earlier post, I spent some time on an analysis that faulted Microsoft’s lack of a culture of security as the key factor in a major incident they, um, experienced5. This has me asking: should any policy attempt to address, nudge, or even hint at institutional culture, let alone direct it? Many institutions, particularly in their establishing statutes, do give voice to the centrality of academic freedom in support of a culture of exploration. In other words, shaping minds through a liberal education requires establishing a culture of inquisitiveness, open-mindedness, and vigorous debate. Surely these are as much a part of our mission - creating the engaged civic culture necessary for a successful democratic society - as teaching reading, writing, and arithmetic.
If this is so, is there room in our policies to contribute to the outlines of a culture of security? If “cybersecurity is not a product or a deliverable, but a process that is woven throughout an organization, creating resilience in the culture and its products6”, then it seems clear that our infosec policy has to be concerned with more than specific controls. Think about this from the perspective of scope: who would issue a policy of such breadth, and who would be authorized to execute it? Your typical CISO may at best have mastery of information security, risk management, and technical and process controls, but it would require hubris of astonishing proportions to believe they’re prepared to weave resilience throughout every function in a large, complex organization. And thus we return to my earlier observation - policy must be written with an eye towards governance and implementation, and vice versa. For governance constrains policy more than policy constrains behavior.
Naturally, culture can’t be created by fiat. When I think of an organization’s culture I tend to think about relationships, actions performed in good faith, and of course decision-making. Words like collegiality, respect, and trust come to mind even where disagreement exists. I trust and respect the nurse who, after applying the tourniquet, pokes my vein for a blood sample. For organizations, I can only fall back on what I’ve articulated before - trust is earned and requires transparency. If our policy is a social contract, it must rely on the four horsemen of principle, authority, governance, and responsibility to maintain transparency. Transparency may not yield agreement, but it can yield understanding. For our infosec policy, perhaps this means that we codify transparency and engagement in our processes. I would imagine that in practice this is accomplished in the guidance the policy provides to governance and to those it invests with responsibility.
Closely allied with this question of a culture of security is the question of privacy. While I disagree with the stance that information security is intrinsically intrusive to personal privacy, it does use a number of surveillance-adjacent technologies. However, these change over time in both substance and intrusiveness. Yet how - and whether - to address this in an infosec policy is an open question. A well-crafted infosec policy should enhance individual privacy, not erode it; similarly, a well-crafted privacy policy should enhance the confidentiality dimension of infosec. Both struggle with the scope question I raised earlier. Unlike ‘procurement’ or ‘teaching’ or ‘public affairs’, they operate as foundational activities upon which all of these more recognizable functions rest. I don’t see this as a tension between privacy and security, but a shared tension they both participate in - between how they are perceived and how they operate.
I was taught early in my policy-writing tenure to avoid granting individuals specific rights in policies. For example, whenever I would reference a right to privacy, I’d be chided by legal counsel and reminded that everyone was an employee of the institution, and that the institution might require access to anything produced with institutional resources for investigative purposes. We threaded this needle by codifying process rights. Some classes of information (barring a court order) required notice and a timeframe for an appeal before they could be accessed7. While once in a while this was burdensome to the security office, most often everyone involved in an incident cooperated readily with an investigation.
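As a thought experiment, those process rights can be expressed as a simple decision procedure. The sketch below is my own reconstruction, with invented data classes and an invented appeal window - the actual policy defined these differently:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical values: the real policy enumerated the protected classes
# and the appeal timeframe; these are placeholders.
PROTECTED_CLASSES = {"faculty_email", "faculty_files"}
APPEAL_WINDOW = timedelta(days=10)

@dataclass
class AccessRequest:
    data_class: str
    court_order: bool = False
    notice_sent: date | None = None  # date the individual was notified

def may_access(req: AccessRequest, today: date) -> bool:
    """Process-rights check: protected classes require notice plus an
    elapsed appeal window before access, barring a court order."""
    if req.data_class not in PROTECTED_CLASSES:
        return True   # ordinary data: normal investigative procedures apply
    if req.court_order:
        return True   # the policy's explicit carve-out
    if req.notice_sent is None:
        return False  # notice has not yet been given
    return today >= req.notice_sent + APPEAL_WINDOW

# The appeal window has elapsed, so access may proceed:
req = AccessRequest("faculty_email", notice_sent=date(2024, 3, 1))
assert may_access(req, date(2024, 3, 15))
```

The point of codifying the procedure rather than a right is visible in the shape of the code: nothing here grants privacy, it only sequences who is told what, and when.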
So we’ve seen that policy must acknowledge culture, scope, and authority - or it becomes dishonest - yet doing so is quite challenging. This is part of why I’ve usually argued that the next generation of information security policy be concise, immutable, and focused on governance, while delegating the selection and implementation of specific technical controls to security leadership.
But of course, no policy is of such theological purity; in practice, it is constructed from a number of competing, overlapping images: one of how universities (or other organizations) manage policy; another arising from institutional eccentricities; and not least, the lens of whatever incident or concern gave rise to the present desire to update the infosec policy in the first place. At a meta-level, this is yet another expression of the tension of stability vs. change. Nevertheless, this last component is worth unpacking a bit. Too often I’ve seen some sort of incident unfold; the leadership is told “if only people did X this wouldn’t have happened”; and the entirely predictable response is to add this specific control to a policy, or to issue an executive mandate about the practice. While this is by no means surprising, it undermines whatever governance process is in place for information security and tends to corrupt the existing policy by inserting new, perhaps ephemeral, controls into it. Think of it as policy by scar tissue.
Essentially, I’m arguing that your infosec policy, while striving to be immutable, should not need updating in response to the failure or absence of a control - perhaps not even in response to changes in the risk landscape8. Rather, it is the link from governance and responsibility to implementation that we should look to for a response.
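One way to picture this decoupling is as two artifacts with a pointer between them: an immutable policy layer that assigns decision rights, and a mutable standards layer, owned by security leadership, where the actual controls live. A minimal sketch, with every name invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)   # the policy layer: stable, rarely amended
class Policy:
    principle: str        # why the policy exists
    authority: str        # who issues it
    governance: str       # the body that oversees it
    responsibility: str   # who holds decision rights over controls

@dataclass                # the standards layer: updated as risks change
class ControlStandard:
    owner: str
    controls: dict[str, str] = field(default_factory=dict)

policy = Policy(
    principle="Protect institutional information commensurate with risk.",
    authority="The Board, via the President",
    governance="Information Security Governance Committee",
    responsibility="The CISO selects and maintains the control standard.",
)

standard = ControlStandard(owner="CISO")
standard.controls["endpoint"] = "EDR on managed devices"
standard.controls["backup"] = "critical research data backed up"

# A failed or obsolete control changes the standard, never the policy:
standard.controls["endpoint"] = "EDR plus application allow-listing"
```

The frozen dataclass is the whole argument in miniature: incidents and shifting risks churn the standard, while the policy only records who gets to do the churning.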
My position is that your big P policies (infosec or otherwise) should address only four things: principle, authority, governance, and responsibility (i.e., decision-making rights). I find, however, that the desire to add to this list is strong. For example, it’s tempting to try to address compliance and accountability in your infosec policy. I tried this in the first system-wide policy I drafted, and was quickly schooled by HR that disciplining personnel for policy violations was their responsibility (not to mention that employment contracts, particularly for union members, didn’t allow some arbitrary policy to change their terms).
Be that as it may, it’s just too common to hear complaints that infosec policy requirements are simply ignored, sometimes by units, but more often by faculty and researchers. The pull to at least attempt to address this is nearly irresistible. Why shouldn’t there be consequences for placing confidential information and mission activities at risk? Individuals agree, as part of joining the institution, to adhere to its policies as assuredly as the institution agrees to pay them for their time. But consider this from the institution’s perspective. Will any sanction levied be more disruptive to academic or business operations than the non-compliance itself? Faculty in particular are the very engine of the mission - won’t almost any sanction disrupt that mission? The business impact of even something as simple as disabling a compromised account can place courses, time-sensitive research, and final exams at risk. As tempting as it is to address this in policy, I’m inclined to counsel you to leave issues of non-compliance out of an infosec policy and simply rely on existing institutional procedures to address them9.
A few comments on what I’m calling ‘tensions’. While I like the metaphor, it does tend to present these as binary oppositions. In reality, they are not zero-sum choices but a kind of tradespace in which multiple options can be evaluated against institutional risk, authority, and capacity. Effective policy occupies a narrow corridor between extremes.
Rereading this post, I find myself asking if there’s a simple taxonomy for the various tensions I’ve identified. Foremost among them must be stability vs. change. We want a stable policy, one that doesn’t need to be updated every six weeks, despite the frenetic pace of cybersecurity threats and practices. Addressing this goes to policy vs. practice, scope vs. capability, and governance vs. implementation. That is, stability in policy requires us to articulate how change is managed and by whom - as I described it, governance and responsibility.
A second major tension runs parallel to stability vs. change: culture vs. strategy. Can policy shape organizational culture, or only reflect it? This question encompasses many of the tensions I've discussed - scope vs. capability (CISOs lack authority to shape institutional culture), policy vs. practice (mandates don't create norms), and privacy vs. institutional authority (culture determines how surveillance is perceived). The fundamental question: can policy help make cybersecurity and privacy shared institutional responsibilities rather than isolated technical concerns?
Of course, taxonomies simplify reality and can be too reductionist if misused. I see their value as sensemaking tools, not predictive models. Unfortunately, drafting policy requires working from inside a context - both the organization’s own and whatever latitude the organization permits. In my next post I’ll step back from this philosophizing and turn to more practical recommendations for an infosec policy and the attendant governance and implementation of it.
1. Truthfully, his position is quite rational inside the incentive structure - a structure entirely framed by the campus, with little or no connection to the system.
2. Much of my current thinking about policy stems from my time in the UC which has long suffered from a truly dismal information security policy. While I won’t speak to it directly, you can view a presentation I gave discussing it, and a solid alternative approach.
3. Infosec governance tends to be a work in progress, evolving as different philosophies of governance come and go. Worse, it is the rare institution that grants budgetary control to a governance committee. I’ve experienced this at every institution I’ve worked at: we establish a governance committee, but the rubber really hits the road in the smaller meetings with the CFO or Provost about what gets funded - which always reflects those individuals’ personal predilections.
4. When I discussed this with a colleague, he demurred, saying the PCI standard wasn’t rigorous enough for his liking and thus he wanted to impose his campus policy on PCI environments. I appreciate the thoughtfulness, but believe this diverted valuable cycles towards minor issues.
5. By ‘experienced’ I mean ‘inflicted on the global economy’.
7. To be clear, this applied only where faculty were involved. Staff (both academic and administrative) were generally treated differently. With the advent of EDR tools, which provide some endpoint forensic capabilities, maintaining this notice period is problematic. Perhaps a distinction needs to be drawn between items needed to respond to an incident and other forms of information, such as email or files.
8. I can’t tell if this statement about changes to the risk landscape and policy immutability is deeply insightful or naïvely absolutist. Definitely something worth thinking more about.
9. I did manage to at least get sanctions mentioned in a policy. It’s tempting to argue that while I’m waving my hand at this issue, it may be the pivot around which establishing a ‘culture of security’ rests. If our leadership devalues compliance with cybersecurity policy and practice relative to faculty autonomy or the fear of faculty flight, haven’t we already lost the battle?