Earlier this month, an email to a security mailing list caught my eye. It mentioned a recent article in the Harvard Business Review about the human side of cybersecurity. I had read the article too; it focused heavily on the concept of high reliability organizations (HROs) and how the principles behind the success of such organizations could be applied to information security programs. But other than referencing HROs, the Harvard article didn't really explain them or say where the idea came from. I found that unfortunate when I read it, because HROs are fascinating creatures.
When Lance Spitzner asked if I would be a guest contributor to this blog, I knew I had the perfect topic. I have studied the HRO literature for over a decade, ever since my dissertation advisor recommended I read up on them in graduate school. And I am so impressed with their applicability to our industry that I used the concept of HROs to create one of the central models in my book People-Centric Security. The Security FORCE model is a behavioral framework that translates and applies the core characteristics of an HRO to security programs. I even call security programs that function like HROs "highly reliable security programs," or HRSPs. In this blog post and the two that follow, I'll explain what HROs and HRSPs are and dive into some of the key characteristics that enable these exceptionally effective organizations to work the way they do.
As the Harvard article described, HROs tend to be organizations that function in highly volatile and dangerous environments: aircraft carriers, for instance, or nuclear power plants (the article discusses nuclear submarines, which are something of a mix of both). Their internal systems are complex, and the external forces they deal with are often hostile. When things go wrong, they often go wrong catastrophically. Yet in many cases, these organizations have fewer accidents and less severe failures than "normal" organizations like financial institutions, retail chains, or government bureaucracies. This counterintuitive success (when more things can go wrong, and go wrong much more harmfully, you would expect bigger failures more often, not the opposite) intrigued researchers in organizational behavior and psychology, who set out to figure out why.
In cybersecurity, there is a lot that can go wrong. The environment is uncertain and changes rapidly; the systems are complex and tightly coupled; and because the system is a social one, with lots of people interacting with technology, behavior is emergent and unpredictable. As more and more of our infrastructure is networked together and automated, the potential severity of a security breach also grows. Attackers can increasingly reach out of the digital world to directly impact the physical one, from cars to medical devices to electrical grids. In other words, our environments look a lot like the organizations analyzed in the HRO research. Whether or not they become reliable like an HRO is up to us.
In my next post, I will discuss how I have adapted the HRO research specifically to security programs and introduce the FORCE Model, which defines the values of an HRSP.
BIO: Dr. Lance Hayden has spent 25 years working in information security, beginning his career as a human intelligence (HUMINT) officer with the Central Intelligence Agency. He has served as a trusted advisor to government, military, and enterprise clients across industries including finance and insurance, healthcare, retail, energy, and telecommunications. He is a leading expert on cybersecurity culture and human security behaviors. He is the author of People-Centric Security: Transforming Your Enterprise Security Culture and IT Security Metrics: A Practical Framework for Measuring Security and Protecting Data. Dr. Hayden also regularly speaks at industry conferences and contributes to security publications. He is a professor and Advisory Board member of the University of Texas School of Information, where he teaches courses on security, privacy, and the intelligence community.