You Want Salt With That? Part One: Security vs Obscurity


A poster to one of the Joel On Software fora the other day asked what a "salt" was (in the cryptographic sense, not the chemical sense!) and why it's OK to make salts public knowledge. I thought I might talk about that a bit over the next few entries.

But before I do, let me give you all my standard caution about rolling your own cryptographic algorithms and security systems: don't.  It is very, very easy to create security systems which are almost but not quite secure. A security system which gives you a false sense of security is worse than no security system at all! This blog posting is for informational purposes only; don't think that after you've read this series, you have enough information to build a secure authentication system!

OK, so suppose you're managing a resource which belongs to someone -- a directory full of files, say.  A typical way to ensure that the resource is available only to the authorized users is to implement some authentication and authorization scheme.  You first authenticate the entity attempting to access the resource -- you figure out who they are -- and then you check to see whether that entity is authorized to delete the file, or whatever.

A standard trick for authenticating a user is to create a shared secret. If only the authentication system and the individual know the secret then the authentication system can verify the identity of the user by asking for the secret.
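
A minimal sketch of the idea in C# (the names here are invented, and the plain-text secret store is deliberately naive; per the caution above, this is illustration, not something to ship):

    // Naive shared-secret authenticator; for illustration only.
    // The rest of this series looks at better ways to manage the secret.
    using System;
    using System.Collections.Generic;

    class NaiveAuthenticator
    {
        // The authentication system's copy of each user's shared secret.
        private Dictionary<string, string> secrets =
            new Dictionary<string, string>();

        public void Register(string user, string secret)
        {
            secrets[user] = secret;
        }

        // The user proves their identity by producing the shared secret.
        public bool Authenticate(string user, string claimedSecret)
        {
            string actual;
            return secrets.TryGetValue(user, out actual)
                && actual == claimedSecret;
        }

        static void Main()
        {
            NaiveAuthenticator auth = new NaiveAuthenticator();
            auth.Register("alice", "correct horse battery staple");
            Console.WriteLine(auth.Authenticate("alice", "guess"));  // False
            Console.WriteLine(auth.Authenticate("alice",
                "correct horse battery staple"));                    // True
        }
    }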

But before I go on, I want to talk a bit about the phrase "security through obscurity" in the context of shared secrets. We usually think of "security through obscurity" as badness. A statistician friend of mine once asked me why security systems that depend on passwords or private keys remaining secret are not examples of bad "security through obscurity".

By "security through obscurity" we mean that the system remains secure only if the implementation details of how the security system itself works are not known to attackers. Systems are seldom obscure enough of themselves to provide any real security; given enough time and effort, the details of the system can be deduced. Lack of source code, clever obfuscators, software that detects when it is being debugged, all of these things make algorithms more obscure, but none of these things will withstand a determined attacker with lots of time and resources. A login algorithm with a "back door" compiled into it is an example of security through obscurity; eventually someone will debug through the code and notice the backdoor algorithm, at which point the system is compromised.

A strong authentication system should be resistant to attack even if all of its implementation details are widely known. The time and resources required to crack the system should be provably well in excess of the value of the resource being protected.
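
For a sense of the scale involved, consider exhausting a random 128-bit key space. At an assumed rate of a billion guesses per second (the rate is of course just an assumption), the job takes on the order of ten billion trillion years:

    // Back-of-the-envelope cost of brute force: 2^128 keys at an
    // assumed 10^9 guesses per second.
    using System;

    class BruteForceEstimate
    {
        static void Main()
        {
            double keys = Math.Pow(2, 128);            // about 3.4e38 keys
            double guessesPerSecond = 1e9;             // assumed attacker speed
            double years = keys / guessesPerSecond / (365.25 * 24 * 3600);
            Console.WriteLine("{0:E2} years", years);  // about 1.1e22 years
        }
    }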

To put it another way, the weakest point in a security system which works by keeping secrets should be the guy keeping the secret, not the implementation details of the system. Good security systems should be so hard to crack that it is easier for an attacker to break into your house and install spy cameras that watch you type than to deduce your password by attacking the system, even if all the algorithms that check the password are widely known. Good security systems let you leverage a highly secured secret into a highly secured resource.

One might think that ideally we'd want it both ways: a strong security system with unknown implementation details. There are arguments on both sides; on the one hand, security plus obscurity seems like it ought to make it especially hard on the attackers. On the other hand, the more smart, non-hostile people who look at a security system, the more likely that flaws in the system can be found and corrected. It can be a tough call.

Now that we've got that out of the way, back to our scenario. We want to design a security system that authenticates users based on a shared secret. Over the next few entries we'll look at five different ways to implement such a system, and what the pros and cons are of each.

  • There's nothing like adding a little bit of obscurity to a known-secure scheme. Suppose, for instance, that you're using SSL in a system where both ends of the communications link are under your control. Then there's nothing to stop you flipping a couple of bits in the input to one of the hashes: it takes you no time at all, and it provably doesn't weaken the security; but it'll confuse the hell out of someone trying to reverse-engineer your protocol. The more human time you can waste, without wasting computer time or compromising security, the better.
  • Off-topic, but still related (security, after all). I've created an ActiveX control that downloads data from a web server, using async monikers et al. I understand that in order to support download from sites that require authentication, I need to implement IAuthenticate. Thing is, the download will most likely occur from the same site that hosted the ActiveX itself, and the containing page. I want to avoid requiring the user to input the same credentials twice. Anyone know how the ActiveX can get the credentials from IE? I haven't been able to find any information about this in MSDN, Google, etc.
  • Bruce Schneier is fond of saying that "anyone can design a cryptosystem that he himself cannot break". Rolling your own is always a bad idea. Even the best in the world get it wrong sometimes; there was (is?) a subtle man-in-the-middle vulnerability in IPSEC related to XAUTH - search the IETF working group mailing list archives from about 2-3 years ago for details.

    Related to obscurity: it's interesting to note that the US Government believes in security through obscurity, in the cryptographic sense: they almost never publish the design docs for released algorithms (e.g. SHA/SHA-1), they rarely publish their own cryptanalytic findings (e.g. whatever caused the NSA to release SHA-1, or more famously, their discovery of differential cryptanalysis 25ish years before the academic community), and sometimes the algorithms themselves are kept secret (e.g. Skipjack, which was only released in tamper-proof chips). The US Armed Forces have been using secret stream ciphers as workhorses for years.

    I gotta admit, all of the above strike me as dubious strategy at best. Then again, the NSA has more crypto people than the rest of the world combined, so I guess their stuff is still peer-reviewed, albeit not in open literature.

    This series sounds like fun. :-)
  • "Anyone know how the ActiveX can get the credentials from IE?"

    My guess? Implement IObjectWithSite::SetSite for an IUnknown back to IE. There should be some way to retrieve or listen (OnNavigate?) for the authentication info.
  • Fabulous Adventures in Coding assays this week a, well, fabulous adventure, in simple cryptography. I know enough to get myself in deep trouble with this subject, but Eric has put together three short and knowledgeable posts that begin easy and...
  • It still seems that "knowing the salt tells you nothing" depends on what else you know about the salt. If you just know that a salt is used in the passwords on a server, that's very little help. If you know the salt used for a particular password, then a dictionary attack becomes more feasible: you just have to try possible salt + password combinations. And if you know both the salt for a password and how it's combined with the password, then you can run a straightforward dictionary attack, right? (A sketch of exactly this attack appears after the comments.)
  • A recent question I got about the .NET CLR's hashing algorithm for strings is apropos of our discussion.

  • > "There's nothing to stop you flipping a couple of bits in the input to one of the hashes: it takes you no time at all, and it provably doesn't weaken the security..."

    From a cryptographic perspective, "flipping a couple of bits" in the initialization values of an encryption scheme can indeed weaken it severely. Many such studies have been done to prove it. Those values may look random, but they aren't.

  • >> A security system which gives you a false sense of security is worse than no security system at all

    This is utterly wrong. A little hole in a security system is so much better than no security at all.

    You seem to have misunderstood my statement, since you have re-stated it completely differently. A security system which gives you a false sense of security because it is broken is much worse than having no security system. In both cases, you lack security against threats -- precisely those threats that the security system is designed to mitigate. In the one case, you do not know that, and are more likely to increase your vulnerability. In the other, you do know that and are likely to take steps to mitigate vulnerability.

    I'm absolutely not making the statement "a security system that protects against some threats better than others is worse than no security system at all", which seems to be how you are interpreting "a security system which gives you a false sense of security is worse than no security system at all." Every security system mitigates some threats better than others; the question at hand is the value of a security system that secretly fails to mitigate the threats it is designed to mitigate. 

    If you are going to anonymously criticize me then you could at least choose to criticize what I'm actually saying rather than attacking some crazy straw man that bears little resemblance to anything I said. -- Eric

    Do you have snipers on the roof? A no-fly zone above your house? It's so insecure! Yet I guess you still keep the door locked, even if it does not guarantee 100% security.

    See, there you go. You've completely and utterly misinterpreted what I said. Sure, I lock my door. I do not have a false sense of security as a result of doing so. I am very clear as to what vulnerabilities and threats that part of my home security system mitigates, so the sense of security I get from it is in no way false.

    If I had been oversold -- if the lock vendor had convinced me incorrectly that the lock kept my house secure against ninja army attack -- then that really would be worse than not having a security system because being misinformed as to my actual security, I might be overconfident in exposing myself to actual attacks. I might decide to keep uninsured gold bricks in my house, for instance, incorrectly believing them to be safe and thereby massively increasing both the risk and the cost of a successful ninja army attack. That is what a false sense of security does, and why it is undesirable.

    But since I am very clear as to what the shortcomings of my door lock security system are, and very clear on what attack costs I am attempting to minimize, I know how to allocate my resources appropriately. (Amongst, say, additional security devices, insurance against successful attacks, moving valuable resources to more secure sites, and so on.) -- Eric
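
To make the dictionary-attack question in the comments concrete, here is a minimal sketch, assuming (purely for illustration) that the stored value is SHA256(salt + password). The salt defeats precomputed tables shared across users, but once an attacker knows a particular record's salt and the combining rule, a targeted run through a word list works as usual:

    // Sketch of a per-password dictionary attack when the salt is known.
    // The scheme hash = SHA256(salt + password) is an assumption made
    // for illustration; it is not any particular system's design.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class DictionaryAttackSketch
    {
        static string Hash(string salt, string password)
        {
            using (SHA256 sha = SHA256.Create())
            {
                byte[] digest = sha.ComputeHash(
                    Encoding.UTF8.GetBytes(salt + password));
                return Convert.ToBase64String(digest);
            }
        }

        static void Main()
        {
            // The attacker has stolen this record; the salt is not secret.
            string salt = "x9F2";
            string stolenHash = Hash(salt, "hunter2"); // victim's password

            // One hash computation per candidate word.
            string[] wordList = { "password", "letmein", "hunter2" };
            foreach (string guess in wordList)
            {
                if (Hash(salt, guess) == stolenHash)
                    Console.WriteLine("Cracked: " + guess);
            }
        }
    }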
