Security Awareness Blog

The Rational Rejection of Security Advice - A Rebuttal

Recently Cormac Herley of Microsoft Research released a whitepaper titled The Rational Rejection of Security Advice by Users. The paper discusses the cost issues of awareness training and education and includes a cost analysis of three awareness topics. He then documents why he feels these areas are not cost effective and questions the value of awareness programs. After reading the document I wanted to share some of my own thoughts. On some points I agree with Mr. Herley, on some I disagree, and on some I feel he is just dead wrong. The biggest difference between Mr. Herley and me is that I am far more optimistic about awareness and education. Below I explain why.

    • Analysis: Where I disagree with Mr. Herley is in his analysis. Most of his cost analysis is narrowly focused on specific attacks, forgetting that the very same education that helps protect against one attack helps protect against many others. For example, his first analysis focuses on compromised bank accounts, stating that consumers have minimal costs as they are simply reimbursed by their bank. As such, he states that awareness and education to protect your bank account is not worth the cost to the individual (this assumes you actually get reimbursed). What he fails to mention is that the same awareness and education can help protect against many other attacks, such as credit card fraud or identity theft, which in America alone costs an additional $50 billion (2010 Identity Fraud Survey Report, Javelin). The costs are much higher, and the value to the end user much greater, when you combine the different attacks. What he also fails to mention is the cost to organizations. If you are a business and your bank account is hacked, you do not get reimbursed; your organization has to pay for those losses (read Brian Krebs for an outstanding series on this exact issue). Even more damaging is the cost of lost intellectual capital. Last year the Federal government issued a report stating that between 2008 and 2009, American business losses due to cyber attacks had grown to more than $1 trillion worth of intellectual property (Financial Management of Financial Risk, ANSI 2010). All of a sudden those costs look far more damaging, and awareness and education becomes that much more valuable.
    • Topics: Then there are the awareness topics Mr. Herley chooses for his comparison. He points out that passwords, analyzing URLs and SSL certificates cost more than they are worth. You know what, with SSL certificates I have to agree with him. This is a painful and time consuming topic with little return on value. Heck, even I get confused with SSL certificates. So yes, the cost is greater than the return; in fact, I bet many other security professionals would agree. This does not make awareness invalid, it just demonstrates there are far more effective topics to focus on. For example, instead of teaching SSL certificates to identify a MITM attack, awareness should focus on the more valuable topic of how not to get owned in the first place. The key to ROI for awareness is not teaching specific attacks (which are constantly changing), but focusing on more general concepts that apply to multiple attacks (keep your systems and applications updated, explain the concepts of social engineering, know that anti-virus site is not real, etc.).
    • Costs: Mr. Herley then goes on to say that everyone pays for education in the time they allocate, but only the victims bear the costs. This is simply wrong. We all pay for cyber crime, not just the people who have their bank accounts hacked or identities stolen. Banks are losing millions in financial fraud. How do they recover? They increase banking fees to their customers. Online merchants are losing millions to credit card fraud. How do they recover? They simply raise their prices to cover the costs. In the end we are all paying for these successful attacks.

    Overall, I feel the paper takes an overly negative view of security awareness and education. He underestimates the costs of attacks by narrowly focusing on a few specific types, without including the combined costs of other attacks. Also, several of the awareness topics he focuses on are generally acknowledged as having limited educational value. That is kind of like picking on the smallest kid on the block to prove your point. But he does teach a good lesson. We have to prioritize what we teach people, make sure what we teach will make the biggest difference, and do it in a cost effective manner. This is not easy, but then again that is what I hope to achieve with this blog ... making a difference when it comes to securing the human.

    7 Comments

    Posted April 5, 2010 at 11:17 AM | Permalink | Reply

    Paul E. Black

    An additional cost is disruption. I buy accident insurance to minimize the (economic) disruption, not because I expect to make money in the long run from insurance. I often take a little extra time to be a little more secure to reduce the chance of my life being interrupted by some security breach.

    Posted April 7, 2010 at 3:49 AM | Permalink | Reply

    Gary Hinson

    There's *so* much more to awareness of security threats than both you and Mr Herley appear to imply from this piece, Lance. For a start, what about the "irrational" rejection of security advice? Or the plain ignorance of security advice. The confusion that comes from conflicting advice. The pressure of other/conflicting priorities and concerns. The boredom and careless disregard that comes from over-familiarity with those all-too-common security error messages. The "it'll never happen to me" perverse "rationalization" that discounts so many genuine, commonplace but mundane threats relative to the headline-grabbers (like the outrageous toll of road deaths versus the extremely rare effects of terrorist attacks). I could go on, but I'll simply tip my hat towards Bruce Schneier's frequent writings on this topic.
    There's yet more to security awareness than understanding and appreciation of the threats, or better still the risks. Knowing that there is something dark and unwelcome Out There is not in itself enough to prevent unpleasantness. The next two steps are: (1) understanding what one might personally do to mitigate the risks; and (2) actually taking those steps. I distinguish them because awareness, taken literally and alone, achieves nothing. I may know that opening unsolicited or dubious email attachments is probably not safe, but unless I actually avoid doing so, my system is pwned and my privacy is still toast. And even if I avoid opening dodgy email attachments, that's not enough to avoid all forms of malware, and is certainly insufficient to protect me against the whole panoply of frauds, hacks, scams and so forth.
    Oh and by the way, most of us experience far greater losses as a result of self-inflicted damage and plain accidents than we ever do from deliberate attacks. Addressing general carelessness and casual typos is probably a more valuable goal for an awareness program than, say, choosing decent passwords.
    I'm happy to recommend Rebecca Herold's outstanding book on security awareness for a more rounded and complete exposition on this, and best of all it is packed with pragmatic suggestions on how to address the issues through a more intelligent approach to security awareness.

    Posted April 8, 2010 at 9:36 AM | Permalink | Reply

    lspitzner

    Gary, I'm especially intrigued by your point about self-inflicted damage. Any examples of what you think are the top topics in this area, and of what we should be making people aware of, and how?

    Posted April 9, 2010 at 9:28 AM | Permalink | Reply

    Cormac Herley

    Lance,
    Thanks for the comments. I sincerely appreciate it when anyone takes the time to engage with something I've written, especially when it's as thoughtful and respectful as your comments are. Thank you. Having said that, let me just say why I disagree with your rebuttal.
    Analysis: certainly, in looking at end-users, I chose the simplest case I could find. The costs, benefits, and decisions all belong to the user. Some of the lessons carry over to enterprise space, but I thought I was picking a big enough fight (for now) focusing on consumers. A trillion dollars is indeed a lot, but A, I don't believe it, and B, how much of this can be fixed by choosing strong passwords, reading URLs or paying attention to cert errors? Javelin says that ID theft was $50 billion in 2009 (though they want $3000 to see the report). The FTC in 2007 put it at $15 billion not $50 billion (report at http://www.ftc.gov/os/2007/11/SynovateFinalReportIDTheft2006.pdf) and the median out of pocket expense per user was $0, and the median time spent was 4 hours. Again, I would challenge you to demonstrate how much user pain and expense can be reduced by the advice we offer. Users have voted with their feet on this, and it's worth trying to understand why. If wafting big numbers into the discussion could convince users we'd be done by now.
    Topics: We're actually in agreement that user attention is far too valuable to be wasted, and prioritization is key. I'm just unable to see an argument where reading URLs or very strong passwords are a good use of time. They have benefit, but I don't see the analysis that shows it's greater than the cost.
    Costs: I completely agree that banks pass the costs back to users. Yes, we all pay for cybercrime. But expecting users to make individual effort to achieve a social good is a tough sell. A user who chooses a really strong password reduces by an unknown amount the unknown probability that his account is hacked, and maybe decreases marginally the fraud cost that the bank passes along to everyone. The world where people behave like that is galaxies away from the one where I live.
    It may not sound that way, but I think we agree on much. I couldn't agree more that prioritization is key. You're 100% correct that we must focus on non-ephemeral stuff. And "make sure what we teach people will make the biggest difference, and do it in a cost effective manner." Preach it brother!
    Cormac

    Posted April 12, 2010 at 6:59 PM | Permalink | Reply

    Gary Hinson

    Hi Lance.
    By "self-inflicted damage", I was referring to those little incidents/accidents that affect all of us some of the time. A lot of them count as integrity failures (typos being a classic example), some are availability failures (deleting the wrong file, shredding the wrong paper, losing stuff generally) and a few are confidentiality failures (disclosing secrets to friends and family, or even worse on the Web). Incidents like these are easy enough to think up (or recall!) and, since they are personal and resonate with most people in the audience, they help explain similar kinds of incident that affect organizational information assets. That makes them worth weaving into the awareness materials from time to time.
    As to how to make use of them, here are a few suggestions:
    ''" Drop them into newsletters, briefings etc. as examples;
    ''" Expand them into case studies for class discussion;
    ''" Create scenarios along these lines and use them to pose questions to assess understanding;
    ''" Mention them in security seminars and the like, for instance asking whether anyone in the audience has ever accidentally pulled the wrong plug and taken down a computer system.
    Regards,
    Gary

    Posted April 16, 2010 at 2:05 PM | Permalink | Reply

    Eric Peterson

    Thank you Cormac for having the common sense to debunk decades-old password mythology. Back in the 80's there was a problem with password cracking. That was solved IIRC by about 1990 with password hash file protection.
    But what if a password hash is unprotected? Then the system is badly designed and should not be trusted for any use. What if the passwords are stored in cleartext (even within an encrypted database) so they can be sent back to the user? Bad design. What if the system has no lockout mechanism to protect against dictionary clients? Bad design. There are, in fact, no other reasons to require passwords of any strength greater than 4 to the 26th or 6 to the 10th power that are not cover for bad design.
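    To make those two design points concrete, here is a minimal sketch of salted password hashing plus a lockout check, using only the Python standard library. It is an illustration under my own assumptions (the iteration count, lockout threshold, and function names are placeholders I chose), not code from the comment or from any particular system.

    ```python
    # Minimal sketch (illustrative only): store only a salted hash, and refuse
    # further attempts once a lockout threshold is reached.
    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000   # PBKDF2 work factor (assumed value for the example)
    MAX_ATTEMPTS = 5       # assumed lockout threshold

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest); only these are ever written to storage."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes, failed_attempts: int) -> bool:
        """Check a login attempt; a locked account rejects every guess."""
        if failed_attempts >= MAX_ATTEMPTS:
            return False  # lockout: online dictionary clients get nowhere
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    if __name__ == "__main__":
        salt, digest = hash_password("correct horse battery staple")
        print(verify("correct horse battery staple", salt, digest, failed_attempts=0))  # True
        print(verify("password123", salt, digest, failed_attempts=0))                   # False
        print(verify("correct horse battery staple", salt, digest, failed_attempts=5))  # False (locked)
    ```

    With protected hashes and a lockout in place, the marginal benefit of ever-stronger user passwords shrinks, which is the point the paragraph above is making.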
    So why do we burden users with strong passwords to protect against bad system design? So they can go through all that trouble and then have their credentials stolen anyway because of other bad design? Then there are all the other attacks, like keystroke logging, that Cormac mentions, making strong passwords completely moot. One he didn't mention was the cleartext password channel. Such obvious bad design is a good reason for rule 7 (don't reuse passwords), but since the strength argument is moot, rule 7 isn't as big a burden.
    While user password burdens are my biggest pet peeve, I would also point out the problem of external costs to developers in the security space, particularly for things like certificates. It is very easy when writing client or server software to accept a certificate rather than check for revocation or even check the chain of trust (although some of that is more automated than a few years ago). It is also easy to sidestep the code audits that would detect such flaws. It is much harder to check properly and to review code to make sure it does check properly.
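    As a rough illustration of that contrast, the sketch below uses Python's ssl module as a stand-in for whatever client library a developer actually uses; the hostname and timeout are placeholders I chose. Chain-of-trust and hostname checking are on by default, and the "just accept the certificate" shortcut is a deliberate opt-out; revocation checking (CRL/OCSP) still takes extra work either way.

    ```python
    # Sketch only: contrast the "accept any certificate" shortcut with
    # default verification in Python's ssl module.
    import socket
    import ssl

    HOST = "example.com"  # placeholder host for illustration

    def fetch_peer_cert(verify: bool = True) -> dict:
        context = ssl.create_default_context()  # chain-of-trust + hostname checks on by default
        if not verify:
            # The easy but dangerous shortcut: accept any certificate at all.
            context.check_hostname = False
            context.verify_mode = ssl.CERT_NONE
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                return tls.getpeercert()  # empty dict when verification is disabled

    if __name__ == "__main__":
        # With verify=True, a bad chain or wrong hostname raises
        # ssl.SSLCertVerificationError instead of silently connecting.
        print(fetch_peer_cert(verify=True).get("subject"))
    ```

    Disabling verification is two lines; doing it right, and reviewing code to confirm it is done right, is the harder part the comment describes.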
    Unfortunately it is much easier in both of the above cases to believe we are secure. Many users will willingly go through the password exercise because they think they will be more secure. And many developers will have similar misplaced trust in security panaceas and their own thoroughness.

    Posted December 20, 2010 at 6:25 PM | Permalink | Reply

    HJohn

    @Eric Peterson
    Good points about bad design. In my experience, a poor implementation is the culprit more often than security parameters or encryption.
    To cite just one example, consider the "Evil Maid" attack against TrueCrypt-encrypted hard drives. Many may have thought the encryption was broken, which wasn't the case. Attackers used physical access to the computer and exploited the implementation to log keystrokes. Encryption was never the weakness.