Over at Word to the Wise, Laura Atkins has a post up where she talks about the real problem with ESPs: their lack of internal security procedures, which resulted in the breach of many thousands of email addresses (most notably at Epsilon).  However, Atkins isn’t only criticizing ESPs’ lack of security but also the industry’s response, wherein they have suggested countermeasures that are irrelevant to the problem.  Better spam filtering, signing mail with DKIM and securing consumers’ machines are all beside the point – the problem is that ESPs do not have security policies in place that would have prevented this breach.

As I read through the comments on the blog, what Atkins is getting at is that Epsilon should have implemented policies such that even if a third party breached them, it wouldn’t have mattered:

ESPs must address real security issues. Not security issues with sending mail, but restricting the ability of hackers to get into their systems. This includes employee training as well as hardening of systems. These are valuable databases that can be compromised by getting someone inside support to click on a phish link.

Not everyone inside an ESP needs access to address lists. Not everyone inside an ESP customer needs full access to address lists. ESPs must implement controls on who can touch, modify, or download address lists. These controls must address technical attacks, spear phishing attacks and social engineering attacks.

To further clarify on this, here’s a brief 411 on the above paragraphs:

  • Not everyone inside an ESP needs to access address lists.

    The idea behind this is that email addresses are stored in databases.  When someone opts in, their address is written to a database somewhere on a central server (or servers for redundancy).  But who can access these lists?  Can any old ham-and-egger with limited technical know-how log on to the server, run a select command and dump the outputs to a file?

    In our environment, not just anyone can do this.  Only certain people have access to key servers.  And even in those cases, many of us only have read access.  We cannot dump the outputs to a file and then transfer the contents of that file offsite.  Atkins’ point is that if everyone in the company has access to all the servers by default, then an attacker’s job is easier; they only need to successfully phish anyone in an organization of a couple of hundred employees.  If access is restricted by default and opened up only to those who need it, the attacker must instead phish one of the few people in the organization with access to the right servers.  This narrows the window of opportunity in which a hacker must succeed, kind of like firing two shots into the exhaust system on the Death Star.

  • Key personnel require training on social engineering attacks.

    Social engineering has been around forever.  Heck, I use it myself in many of my magic performances.  As I said in my presentation at Virus Bulletin in Vancouver in October 2010, the best strategy for combating social engineering attacks is through education.  People who don’t know about these types of scams are more prone to fall for them if certain types of emotions are invoked.  By teaching people about the tricks that hackers use, people can become more resistant to falling for them.

    Thus, ESPs – indeed, any company – need proper security training in place to illustrate the risk of these types of attacks.  If employees are aware of the risks, it further reduces the odds of an attacker’s success.  Someone is more likely to recognize a scam for what it is and not open the attachment or click the link to a malicious site.

    This is a part of the human element of narrowing the attack surface.

  • Key data must be encrypted.

    The rationale behind this is that even if an attacker steals the data, because it is encrypted it is in a format that is not useful to him.  By the time he breaks the encryption, the data will no longer be useful, because valuable data frequently has a time limit.  Storing stuff in plain text is risky because if you lose it, you have given the thief everything he needs to use it to his advantage.
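To make the first point concrete, here is a minimal sketch of default-deny, role-based access to an address list.  This is purely illustrative – the role names, users and `AddressListStore` class are hypothetical, not any ESP’s actual implementation:

```python
# Illustrative sketch of default-deny access control (hypothetical names).
READ = "read"
EXPORT = "export"  # the ability to dump the list to a file

# Access is denied by default; only explicitly granted users appear here.
GRANTS = {
    "dba_alice": {READ, EXPORT},
    "support_bob": {READ},  # read-only: cannot dump the list offsite
}

class AccessDenied(Exception):
    pass

class AddressListStore:
    def __init__(self, addresses):
        self._addresses = list(addresses)

    def _check(self, user, permission):
        # Unknown users fall through to an empty set: default deny.
        if permission not in GRANTS.get(user, set()):
            raise AccessDenied(f"{user} lacks {permission!r} permission")

    def count(self, user):
        self._check(user, READ)
        return len(self._addresses)

    def export(self, user):
        self._check(user, EXPORT)
        return list(self._addresses)

store = AddressListStore(["a@example.com", "b@example.com"])
print(store.count("support_bob"))   # read access works
try:
    store.export("support_bob")     # export is refused
except AccessDenied as e:
    print("denied:", e)
try:
    store.count("mallory")          # unknown users get nothing
except AccessDenied as e:
    print("denied:", e)
```

The design choice that matters is the direction of the default: a phished account that was never granted anything can touch nothing, so the attacker must land on one of the few granted accounts.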
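And to illustrate the third point, here is a toy sketch of why encrypted data at rest is useless to a thief who doesn’t also have the key.  This uses a one-time-pad-style XOR purely for illustration – a real system should use a vetted library and algorithm (e.g. AES), never a hand-rolled cipher like this:

```python
# Toy illustration ONLY - not a real cipher for production use.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte."""
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

address = b"subscriber@example.com"
key = secrets.token_bytes(len(address))   # stored separately from the data

stored = encrypt(address, key)            # what sits in the database
print(decrypt(stored, key) == address)    # recoverable with the key
```

The thing to notice is the separation: stealing the `stored` blob alone gains the attacker nothing; he also needs the key, which lives elsewhere under its own access controls.  Plain-text storage collapses those two thefts into one.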

The problem is that even if these security procedures were implemented, they still wouldn’t harden an organization against a spear phishing attack.  Software companies need to take part of the blame.  Why do I say this?  Because RSA could have implemented all of this and still been hacked in the attack that just occurred.

In the RSA attack, hackers got the random seed and the algorithm used to create the random tokens.  They got this by sending phishing mails – with a subject line that said something along the lines of “Documentation for 2011” and an attachment – to a few employees who would have had the proper access.  The mails were caught by spam filters, but the employees went into their junk folders, believed the messages were false positives and opened the document.  Mayhem ensued.

These people know about attacks.  The reason they went into the junk folder is that they have experienced times when their spam filters blocked legitimate mail.  They have therefore been subconsciously trained not to trust their spam filter 100%, because it has obviously flagged legitimate messages in the past.  Whose fault is it that users have been trained to go into their spam folders and retrieve legitimate mail?

Even if they had their antimalware signatures and software patches up to date, this malware exploited zero-day vulnerabilities and compromised the machine.  The attackers knew what they were looking for and got what they needed.  They did some pre-operational surveillance to identify the people they needed to target, got in and out, and then covered their tracks (this is very similar to what happened to Google in January 2010).  The point is this: whose fault is it that the software they were using contained a zero-day vulnerability?

If an attacker specifically targets someone, goes to the time and effort of customizing a zero-day, and knows their way around the inside (or has the technical know-how to navigate their way around and create a map quickly), then creating policies to resist these types of attacks is going to put constraints on people.  The reality is that some people need access to data to do their jobs.  Somewhere, code and data sit in plain text, and that allows people to reverse engineer systems and steal critical data.  People will sometimes fall for phishes.

If I get you to name any card at any number, I have an advantage over you.  Everything looks perfectly fair, but it’s not.  I know what I’m doing, and my odds of success are very high, whereas you are merely looking to be entertained.  Every one of us has other things to do during the day and gets distracted; protecting against phishing is not high on our list of priorities – we are looking to do our day-to-day jobs.  Attackers who want the data have the advantage of surprise and subterfuge.  This advantage is the difference between success and failure.  As Sun Tzu said, “He will win who, prepared himself, waits to take the enemy unprepared.”

I don’t know what the answer is, but I know it’s not simple.  The industry needs to come up with a comprehensive guide to securing IT environments.  People need training.  Data probably should be encrypted.  But none of that is going to solve everything.

In the meantime, think of a number.  Next time we meet up in person, I’ll tell you what it is.