Security Awareness Blog

Human Behavior Modeling - #SecAwareSummit

Editor's Note: Geordie Stewart is Principal Consultant for Risk Intelligence and one of the speakers at the upcoming EU Security Awareness Summit in London on 10 July. Geordie is an international speaker on the topic of security awareness and writes a regular risk communications column for the ISSA international journal. You can learn more about Geordie at his blog. Below he discusses his talk for the upcoming EU Security Awareness Summit and what you will learn from it.

The 'Human Operating System' is a great concept that helps us think of humans as the eighth layer of the OSI model. It reminds us that security is an ongoing process and that different layers require different maintenance tasks to be performed routinely in order to stay acceptably secure. It also reminds us that no layer will ever be 100% secure, so pointing out control failures in layer eight does not mean that the effort spent trying to secure it was a waste of time.

While the eighth layer analogy is useful for communicating the role of security awareness as an ongoing maintenance task, it also turns the spotlight on our approach as awareness practitioners. When it comes to patching the other OSI layers, there is a multitude of formal frameworks, tools and methodologies in widespread use to help drive effectiveness and efficiency. We have the Common Vulnerabilities and Exposures (CVE) system to identify vulnerabilities in a consistent, standard, repeatable way, and the Common Vulnerability Scoring System (CVSS) to rate their severity. We have tools like the Microsoft Baseline Security Analyzer (MBSA) that we can use to quantify missing patches on Windows operating systems. But when it comes to security awareness and the human operating system, we have little in the way of frameworks, tools or methodologies. Instead, we rely largely on gut instinct and traditional approaches which have been around since before the internet. But is this working, or working well enough?

It would be inefficient and ineffective to invest resources in installing patches on operating systems without knowing whether they were needed, or how they might interact with existing system components. We might all agree that the Sasser worm patch (MS04-011) is important to apply, but that doesn't mean it would be wise to try to install it on every system regardless of whether it applies or has already been installed. Yet that's pretty much what we do for layer eight patches such as password complexity. We install 'patches' on the eighth layer without understanding whether the particular patch was required, exactly what effect it will have, or whether it was the 'right' one to install given the resource constraints that exist in every organisation. Just as we would be wary of someone telling us what patches were needed for a system they hadn't even looked at, we should also be aware of the folly of pursuing a strategy that amounts to the blind patching of layer eight.

So how do we improve the targeting and benefit of layer eight patching? In my session we will look outside the information security field to find frameworks, tools and approaches relevant to risk perception that can be leveraged for the benefit of information security. Taking a user-centric approach, we will look at frameworks such as the Extended Parallel Process Model (EPPM) to understand how layer eight processes risk advice and the factors that affect the likely outcome. Then we will examine those factors using Lewin's Force Field Model to give security awareness practitioners practical advice on how to adopt a more consistent, repeatable approach to optimising their layer eight patching.