The organizational world model
Are we raging against the wrong machine?
Fundamentally we live in an illusion. We view the world, highly colored and refracted through the lens of our own experiences and biases, and imagine it working according to a certain logic: a kind of physics directing the forces around us. Our reactions, like raising a hand to catch a ball thrown towards us, reflect how we have trained our minds on the perceived reality of that physics. I suspect that in the modern AI parlance we’d call this a world model.
The same is true for our professional lives. We go through our days believing we understand cause and effect, and thus tailoring our actions to create the desired effect. We think we understand the physics of our environment. Yet is there any organization, anywhere, not populated by people who are frustrated by the poor decisions of their managers and coworkers: “this would be such a great place to work, if only…”, or “don’t they understand how much risk we’re taking on by…”? Perhaps we are best described as homo querulus and should consider the sapiens aspirational.
For those of us in cyber, who try to use the calculus of risk in creating our organizational world model, this frustration is made worse by our belief that we’re driven by data. We forget that all our risk measures are intrinsically speculative: if a system is breached, should data be exposed, when a system is knocked offline. We measure twice and cut once, if we’re lucky. Too often we measure a hundred times and the organization never cuts.
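To make that speculation concrete, consider the textbook annualized loss expectancy (ALE = annualized rate of occurrence × single loss expectancy). Below is a minimal Monte Carlo sketch - my own illustration, with every range invented for the purpose - of what happens when you feed that formula the kind of guessed inputs we actually have:

```python
# A hypothetical illustration: ALE computed over plausible-but-guessed
# input ranges. All numbers are invented; the point is the spread.
import random

def simulate_ale(trials: int = 100_000) -> list[float]:
    """Draw annualized loss estimates from speculative input ranges."""
    samples = []
    for _ in range(trials):
        # Breach frequency per year: we rarely know better than
        # "somewhere between once a decade and twice a year".
        frequency = random.uniform(0.1, 2.0)
        # Loss per incident: log-uniform between roughly $50k and $5M,
        # since impact estimates usually span orders of magnitude.
        impact = 10 ** random.uniform(4.7, 6.7)
        samples.append(frequency * impact)
    return samples

if __name__ == "__main__":
    ale = sorted(simulate_ale())
    n = len(ale)
    print(f"5th percentile:  ${ale[int(0.05 * n)]:,.0f}")
    print(f"median:          ${ale[n // 2]:,.0f}")
    print(f"95th percentile: ${ale[int(0.95 * n)]:,.0f}")
```

With ranges like these, the 5th and 95th percentiles land well over an order of magnitude apart. What we bring to leadership is a wide, assumption-laden band, not a measurement.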
It’s easy for those of us raised in a privileged world of technology, liberal democracy, social justice, and of course, scientific progress to forget how flawed human nature is. We are not truly rational creatures; humans are best described as beings that can hold two opposing ideas at the same time and fully believe and act on both. While science marches forward, the challenges we face have little to do with the misuse of its products. Rather, the world is torn asunder, as it always has been, by men suppressing women, religions jockeying to see how quickly they can murder each other, and greed, venality, and mendacity shaping our politics. There is simply no evidence in the historical record that we, the elements of society, have ever changed. Society has just erected porous and flimsy barriers to hide our basest instincts from view. We have always valorized wealth, and pathologized poverty.
I’m taking this particular off-ramp as I reflect on conversations I’ve had with colleagues and mentees on “why can’t my management take infosec seriously?” The frustration of watching your leadership shoot themselves in the foot on cyber matters is heartbreaking[1]. We all do this, not just the newbies. We see our bosses (and their bosses) accept (what we feel is) too much risk, whether deliberately or through some combination of arrogance and foolishness. Projects and services are spun up with an emphasis on delivery, leaving cybersecurity and privacy to play catch-up.
Typically I’ve steered these conversations to focus on communications. As a field we’re really not that good at translating cyber risk into a language our organization understands. I suspect this is true across all industries, though I have to imagine that in the private sector, which has a more mature understanding of the connection between risk and profit, the task is easier[2]. Be that as it may, it makes sense that if your management isn’t hearing you, you’re probably speaking the wrong language. I know I’ve seen this play out a thousand times as an executive’s eyes glaze over in response to discussions of unpatched Java installations.
And surely, there’s always some element of this communications failure in effect. But as I reflect more on the question, I wonder if what we’re really running into is a failure to recognize how organizations work[3]. Organizations, as social constructs, are shaped more by human nature than by logic. Every push for a specific product line or marketing initiative places individuals and relationships into competition for attention (where attention => resources). Every time we, as cyber professionals, bring forward a new initiative, we pretend we are in a quantitative conversation - a physics of risk. Yet against the broader backdrop of how every organization operates, we’re not in a discussion about risk. It’s a competition for resources, and most of our competitors are bringing an entirely different calculus to bear, often one of incentive for the leadership.
What I don’t want to do is reduce this to merely a clash of competing priorities and incentives. That’s a given, and we all understand it. No, I want us to recognize that what we see as a failure to understand our argument is, in fact, our own failure to recognize how organizations operate in ways reflective of human nature. I’m trying to avoid sounding too cynical about this; consider it an observation, not a criticism. When it comes to human nature, I subscribe to the Popeye philosophy: I yam what I yam.
A company develops a product. If it does well, they evolve or multiply it; if it does poorly, they modify or dismiss it. Even public universities operate this way: they invest where there is demand or success. They may speak in the words of quaestio sancti, a purpose that may have birthed the institution of higher education; but do they commit to the common good of society, or is that merely window dressing for stakeholders[4]? Those of us who step back and look at the history of intellectual progress see the connections. We see the threads, how they permeate society, how the most obscure and scarcely understood eddies of research are the stepping stones of progress[5]. From the organizational perspective, however, we’re athletic teams and hospitals, with a smattering of academics surrounding them.
More plainly: we perceive and imagine that our organizations work in predictable, model-able ways. That our logical arguments, built from both received knowledge and experience, are the tools we need to exert organizational leverage. While this isn’t wrong, depending on that alone will only lead to disappointment. Thus we end up sitting at the bar muttering “if only” and wondering what more we can do.
So given this perspective, where does information assurance fit in the organizational zeitgeist? How does human nature get woven into our planning, folded into the world model of the organization? To abuse the physics metaphor a bit longer, it seems that human nature is the dark matter of our universe - we recognize its gravity but can’t actually put our fingers on it.
I don’t think I can fully answer this - the primary reason I write is that I enjoy the exercise of working out issues - and this one seems to warrant further exploration. However, as a salve for the frustrated soul, I can call out a few specifics.
Leadership in most organizations is finely attuned to a kind of parsimony of resources. We love to complain that our executives don’t grok the importance of cybersecurity. But in the modern world, dominated by late-stage capitalism, do we need more than ‘just enough’ cybersecurity? Just enough, for most organizations, isn’t ‘lowering risk to a minimally acceptable level’; it’s just enough attention to soothe either anxiety or a recent wound.
Organizational memory is a fickle thing. Some of it is encoded in policy (as a response to an event), but much of it resides in people. New leadership may understand the risk of a cyber incident, but will lack the emotional valence of those who lived through it. People are busy, institutions are complex, and without regular reinforcement that emotional valence will fade even for those who did experience an incident. This is part of the unstated value of recurring tabletop incident exercises[6].
Control, agency, and autonomy are intrinsically in conflict with one another. Tracing the lines and boundaries of each is far more insightful than org charts or titles[7]. These three exert a high friction - a strong grip - on our emotions, and thus on human nature. We all recognize the territoriality of authority in our organizations, but few things kick the hornet’s nest faster than overlooking how control, agency, and autonomy are perceived by others. I find it fascinating how few organizations formally examine the nature of services (security services in particular) and what partnership means in highly distributed environments like universities. Almost all struggles that get labeled ‘turf’ or ‘politics’ come down to control, agency, and autonomy not being understood or codified.
In the final analysis it may be that cybersecurity nirvana is itself an illusion we imagine is just over the horizon. In our ‘real’ world, everything from product development to protecting customers’ or clients’ data is in thrall to the articulation of human nature in organizational politics and behavior. Updating our world model to include this should help us, as practitioners of information assurance, become more successful. Though even if we recognize (if not accept) the human nature of our organizations, I find it impossible to stop complaining about it.
[1] Let’s be fair: what I’m describing is by no means restricted to professionals working in information assurance. But that is the community I know best.
[2] I’ve made this observation before, and I suspect it’s more a ‘grass is greener’ thing to say than any statement of fact.
[3] I’d like to say “and more specifically, how American organizations work,” but I can’t speak from experience there. It would be hard to disentangle the mishegas of our uniquely American post-capitalist society and its global reach from any specific national style.
[4] As I like to point out, noble causes and strategy make for great copy. But look at budgets if you really want to understand an organization’s purpose and priorities.
[5] “Such developments further pave the way for precision measurements of fundamental physical constants, providing a new approach to address long-standing fundamental physics questions. Examples include BEC [Bose-Einstein Condensates] studies under microgravity conditions on the International Space Station and the demonstration of a cold atom gyroscope in the China Space Station, with current plans for large-scale cold atom interferometers. The field has now matured to include commercial BEC products and remotely-controlled BEC experiments for educational purposes.” https://www.nature.com/articles/s42005-025-02195-x
[6] It’s an interesting question to ask: how do I build this reinforcement into my tabletops? If it’s too explicit it’ll seem artificial; too subtle, and it won’t be noticed at all.
[7] Org charts and titles may be imbued with some limited power, but they largely remain proxies for actual authority.


