Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team.
A poster to one of the Joel On Software fora the other day asked what a "salt" was (in the cryptographic sense, not the chemical sense!) and why it's OK to make salts public knowledge. I thought I might talk about that a bit over the next few entries.
But before I do, let me give you all my standard caution about rolling your own cryptographic algorithms and security systems: don't. It is very, very easy to create security systems which are almost but not quite secure. A security system which gives you a false sense of security is worse than no security system at all! This blog posting is for informational purposes only; don't think that after you've read this series, you have enough information to build a secure authentication system!
OK, so suppose you're managing a resource which belongs to someone -- a directory full of files, say. A typical way to ensure that the resource is available only to the authorized users is to implement some authentication and authorization scheme. You first authenticate the entity attempting to access the resource -- you figure out who they are -- and then you check to see whether that entity is authorized to delete the file, or whatever.
A standard trick for authenticating a user is to create a shared secret. If only the authentication system and the individual know the secret then the authentication system can verify the identity of the user by asking for the secret.
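The shared-secret trick can be sketched as a lookup plus a comparison. This is only an illustrative sketch in Python (the user store and function names are invented, not a real API), and storing secrets in plaintext like this is exactly the naive scheme that the variations discussed later improve upon:

```python
import hmac

# The authentication system's copy of each user's secret.
# (Plaintext storage is deliberately naive here, for illustration.)
secrets = {"alice": "correct horse battery staple"}

def authenticate(user: str, claimed_secret: str) -> bool:
    """Verify identity by asking for the shared secret."""
    known = secrets.get(user)
    if known is None:
        return False
    # compare_digest does a constant-time comparison, avoiding
    # leaking information about the secret via timing differences.
    return hmac.compare_digest(known.encode(), claimed_secret.encode())
```

Only someone who knows the secret can pass the check; the open question, which the rest of the series takes up, is how the system should store and transmit that secret.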
But before I go on, I want to talk a bit about the phrase "security through obscurity" in the context of shared secrets. We usually think of "security through obscurity" as badness. A statistician friend of mine once asked me why security systems that depend on passwords or private keys remaining secret are not examples of bad "security through obscurity".
By "security through obscurity" we mean that the system remains secure only if the implementation details of how the security system itself works are not known to attackers. Systems are seldom obscure enough of themselves to provide any real security; given enough time and effort, the details of the system can be deduced. Lack of source code, clever obfuscators, software that detects when it is being debugged, all of these things make algorithms more obscure, but none of these things will withstand a determined attacker with lots of time and resources. A login algorithm with a "back door" compiled into it is an example of security through obscurity; eventually someone will debug through the code and notice the backdoor algorithm, at which point the system is compromised.
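The back-door anti-pattern just described can be made concrete with a small sketch; the function name and the magic password are invented for illustration:

```python
def check_login(password: str, real_password: str) -> bool:
    # The intended check...
    if password == real_password:
        return True
    # ...and the back door someone compiled in. The system is "secure"
    # only while this value stays obscure; anyone who debugs through
    # the code learns it, and every deployment is compromised at once.
    if password == "joshua":
        return True
    return False
```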
A strong authentication system should be resistant to attack even if all of its implementation details are widely known. The time and resources required to crack the system should be provably well in excess of the value of the resource being protected.
To put it another way, the weakest point in a security system which works by keeping secrets should be the guy keeping the secret, not the implementation details of the system. Good security systems should be so hard to crack that it is easier for an attacker to break into your house and install spy cameras that watch you type than to deduce your password by attacking the system, even if all the algorithms that check the password are widely known. Good security systems let you leverage a highly secured secret into a highly secured resource.
One might think that ideally we'd want it both ways: a strong security system with unknown implementation details. There are arguments on both sides; on the one hand, security plus obscurity seems like it ought to make it especially hard on the attackers. On the other hand, the more smart, non-hostile people who look at a security system, the more likely that flaws in the system can be found and corrected. It can be a tough call.
Now that we've got that out of the way, back to our scenario. We want to design a security system that authenticates users based on a shared secret. Over the next few entries we'll look at five different ways to implement such a system, and what the pros and cons are of each.
A recent question I got about the .NET CLR's hashing algorithm for strings is apropos of our discussion:
> "There's nothing to stop you flipping a couple of bits in the input to one of the hashes: it takes you no time at all, and it provably doesn't weaken the security..."
From a cryptographic perspective, "flipping a couple of bits" in the initialization values of an encryption scheme can indeed weaken it severely; studies have demonstrated exactly this. Those values may look random, but they aren't.
>> A security system which gives you a false sense of security is worse than no security system at all
This is utterly wrong. A little hole in a security system is so much better than no security at all.
You seem to have misunderstood my statement, since you have re-stated it completely differently. A security system which gives you a false sense of security because it is broken is much worse than having no security system. In both cases, you lack security against threats -- precisely those threats that the security system is designed to mitigate. In the one case, you do not know that, and are more likely to increase your vulnerability. In the other, you do know that and are likely to take steps to mitigate vulnerability.
I'm absolutely not making the statement "a security system that protects against some threats better than others is worse than no security system at all", which seems to be how you are interpreting "a security system which gives you a false sense of security is worse than no security system at all." Every security system mitigates some threats better than others; the question at hand is the value of a security system that secretly fails to mitigate the threats it is designed to mitigate.
If you are going to anonymously criticize me then you could at least choose to criticize what I'm actually saying rather than attacking some crazy straw man that bears little resemblance to anything I said. -- Eric
Do you have snipers on the roof? A no-fly zone above your house? It's so insecure! Yet I guess you still keep the door locked, even if it does not guarantee 100% security.
See, there you go. You've completely and utterly misinterpreted what I said. Sure, I lock my door. I do not have a false sense of security as a result of doing so. I am very clear as to what vulnerabilities and threats that part of my home security system mitigates, so the sense of security I get from it is in no way false.
If I had been oversold -- if the lock vendor had convinced me incorrectly that the lock kept my house secure against ninja army attack -- then that really would be worse than not having a security system because being misinformed as to my actual security, I might be overconfident in exposing myself to actual attacks. I might decide to keep uninsured gold bricks in my house, for instance, incorrectly believing them to be safe and thereby massively increasing both the risk and the cost of a successful ninja army attack. That is what a false sense of security does, and why it is undesirable.
But since I am very clear as to what the shortcomings of my door lock security system are, and very clear on what attack costs I am attempting to minimize, I know how to allocate my resources appropriately. (Amongst, say, additional security devices, insurance against successful attacks, moving valuable resources to more secure sites, and so on.) -- Eric