The horse has already bolted

A while ago (yes, I'm very slow) Ivan wrote a couple of blog entries about removing the ability to turn off CAS in the runtime (here and here). Whilst I am sure the CLR team has some good reasons for doing so, many of the comments on the entries exhibit a common misunderstanding about security (well, about lots of things really). The idea is that if you shut off one avenue for the "bad guys" to get their dirty work done, then you will have made the world a better place. Sounds like a reasonable idea, right? Well, it would be, except for two small problems:

1) The thing you are shutting off is inaccessible to the bad guy unless he already owns your machine; and

2) Shutting off one avenue doesn't help when there are many others to choose from

The suggestion is to remove the ability to use the command line:

caspol -s off

or to make the equivalent API call:

System.Security.SecurityManager.SecurityEnabled = false

and to possibly replace it with a registry key instead.
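
For what it's worth, the current state of that switch is also readable from code. Here's a minimal C# sketch (the class name is mine; it assumes, as the equivalence above implies, that the SecurityEnabled property and the caspol switch reflect the same machine-wide setting):

using System;
using System.Security;

class CasStatus
{
    static void Main()
    {
        // Reads the same machine-wide switch that "caspol -s off" flips;
        // true means CAS policy checks are being enforced.
        Console.WriteLine("CAS enabled: " + SecurityManager.SecurityEnabled);
    }
}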

Most of the user comments want the ability to turn security off removed entirely, so that malicious code can't run rampant on your system. But the problem is that if a hacker can already run caspol -s off on your machine, then two things are true:

1) They're running arbitrary code on your machine

2) They're running as a highly privileged user (Administrator) on your machine

Both of these things mean that the hacker has already won! Who cares if he can turn security off or not; he's already 0wnz0r3d j00. Let's say that the caspol switch was removed; there are at least three other things the hacker could do to achieve the same goals (assuming, of course, that the code running rampant on your system isn't already doing all the damage):

1) Replace / patch the CLR itself to bypass all security checks. You probably only have to tweak a couple of bytes in the right CLR binary; anyone running arbitrary code as Administrator can replace any system file on your machine with a patched / hacked / back-doored version

2) Update policy with a blanket rule granting FullTrust to all code from every location. They can't turn security off any more, but they sure as heck can cause the security system to grant all code FullTrust:

caspol -en -cg 1 -levelfinal on

(DO NOT enter that command line into your system! It will let all code - even code from the internet - have full control over your system)

3) Avoid using managed code! Well duh! If the CLR security system is going to enforce security policy, then just run native code instead :-) Even script code (VBScript / JScript) running under WSH has FullTrust on any normal system

The problem is that you're trying to shut the barn door after the horse has already bolted. It's like asking your locksmith to remove the ability to unlock your front door from the inside: once the burglar is inside, she doesn't need to unlock the door anymore! All it does is make life harder for you (the home owner), since you now have to climb out the window instead of going out the door, and it doesn't really stop the bad guys from getting in anyway (you left the window open on your way out...)

Does it make sense to reduce the discoverability of the feature so that curious or naive users don't accidentally make their machines insecure? That's quite possibly a good idea. An average user probably shouldn't be able to turn off security by simply copying text from a web page or e-mail message into the Start -> Run dialog, because it might be easy to socially engineer them into doing so (or they might forget or not know how to turn it back on later).

But don't fool yourself into thinking that it improves the real security of the system. The horse has already bolted.

  • I would argue that there is also a 3rd problem with that philosophy about security, a secondary issue that people often forget about because they are simply shutting down "exploitable" functionality and not looking at the overall model.

    The castle or fortified house model is an excellent one for conveying the roles of administrators, attackers, and the tools available to them. As you say, if someone is inside, they already have an endless array of capabilities - they've won! Removing usable tools within your protected system won't change that fact. It DOES have an effect on administrators though: if the tool is disabled, they cannot use it in defense of the system.

    The key always goes back to classic defense strategy: make the perimeter secure, watch it like a hawk, and even provide defense in depth; but welding your gun cabinet shut is probably not a good approach.
  • You are totally right that removing caspol -s off isn't going to protect against a malicious hacker who already has sufficient privs to execute the command, but I think the bigger issue is in the end user naively executing the command. I have seen too many newsgroup posts whose "fix" for SecurityExceptions is to turn security off.

    Removing caspol -s off helps to protect against a social engineering attack where a naive user is lured into executing the command and then compromised.
  • Andrew - totally agreed that it makes sense to protect people from themselves. Fooling someone into running caspol is even easier than fooling them into lowering their IE security settings, because you can cut+paste into the "Run" dialog... changing IE settings takes a bunch of manual steps.
  • You are of course right that security is not just turning things on and off. However, I don't think the people who commented misunderstand security. People are complaining because of Microsoft's history. There are still way too many undocumented "features" that lead to security holes in Windows.
    Ivan's entry smelled, at least to me, like it may end up as "they did it again!". Making holes invisible is bad no matter what; having other avenues is just another problem. For now, even when CAS is turned off (whether by accident or not doesn't matter here), there is no way other than peeking into the registry to know that something has gone wrong with CAS. That is of course a problem, and it needs to be made glaringly visible.
  • I hold to my original comments to Ivan. Because you can programmatically turn off the security of the framework, you expose yourself to unnecessary risk. I see no real benefit to providing the option... and only hear about its use (-off) when diagnostics occur. The weakest link is the human factor when it comes to security, and when possible you simply should not provide options like this that are not needed by the end user.

    The only thing that would give me reservation is if someone can show me more practical uses for the -off option. However, just because its a "security feature" doesn't make it a secure one. I think we need a big ass poster that just says:

    Security Feature != Secure Feature

    Quite frankly the recommendation of using a registry setting is ridiculous. It provides no added security past the fact that you can throw a DACL on it... which would typically end up using the same group as the user needed to run caspol.

    The functionality for the original feature quite frankly should never have been part of the final product... and should have been one of the first things caught during threat analysis. Funny thing is I remember seeing a DREAD priority check list on campus one time that matched a STRIDE threat that addressed such features. I really wish I could remember who did that to see what their findings were during a threat modelling session.

    There is no benefit to having this feature exposed to the user... yet there are unnecessary risks. The weakness in human nature / the end user can always trump security... removing the ability simply removes the potential attack vector completely.

    Yes the horse bolted... but if you never wanted him to leave the barn, why did you build a door that lets anyone open it in the first place? (Rhetorical question... I just know I am going to get in trouble with that analogy) :)
  • This is the "it's not 100% effective, so not worth doing" argument, as well as the blind spot of assuming the frontier will never be breached (therefore why bother to manage internal risk surfaces).

    If you are dealing with a live human, then the first is true; parallel ways in will be exploited, depending only on whether the attacker knows about them. If you are a business site worth the attention of live hackers, this is an important consideration.

    For the rest of us, what we face are self-contained automated attack bots that have their intrusion logic hardcoded. Block the channel they use, and they fail, unless programmed to anticipate this and work around it.

    What's possible is often way more than what is done, especially by the kiddies most likely to do capricious damage to prove their muscle. Sometimes malware that has gotten all the way in through exploits etc. is foiled by something as simple as a non-default OS path.

    So, while nothing is 100% effective, anything partially effective is worthwhile as long as it doesn't render other strategies less effective or have downsides that outweigh the benefits.

  • Jumping in a bit late here, but what the heck…

    I'm a little surprised that neither you, Ivan, nor any of the commenters so far have really delved into the developer perspective on disabled CAS environments. As a developer, I feel that the horse gets a good running jump the moment I release code to execute in an environment that is under someone else's control. That someone might be a perfectly legitimate administrator of the machine on which the code is running. However, just because he is already in full control of the machine and able to run arbitrary code doesn't necessarily mean that he should be able to do whatever he wishes with my code.

    A developer might have several reasons for wishing to ensure that his code cannot run in a disabled-CAS environment. These might include both protection of one's client (e.g.: don't allow particularly "sensitive" operations to be hijacked by malicious attackers or even legitimate users) as well as protection of one's own intellectual property and work (e.g.: don't allow a client to write code against one's private APIs).

    In any case, it doesn't matter in the least to me what mechanisms are available for disabling CAS. As long as _any_ mechanism persists for disabling CAS in an untampered CLR, I'll be worrying about how to prevent my code from executing in a disabled CAS environment, not about how easy or difficult it might be to disable CAS in the first place.

    At the moment, the only mechanism for preventing such execution seems to be detecting whether CAS is enabled and responding appropriately, usually by throwing an exception if it's not. Aside from being a bit of a PITA, this approach is prone to errors: it's just too easy to forget to add the check in one of the many places it might be required in any given assembly. I'd much prefer to see an attribute-based alternative along the following lines:

    1. Add an attribute that can be used at the assembly, class, or member level to declaratively prevent execution in a disabled CAS environment.

    2. Allow "downward" propagation of the setting from assembly to classes to members unless explicitly overridden. For obvious reasons, overrides would only work if the higher level setting is to permit execution and the override is to prevent it.

    3. Do not allow this attribute to be overridden by anything in the runtime environment. i.e.: There's nothing an administrator should be able to do to allow my code to run in a disabled CAS environment if the developer has used the attribute to specify that this should not be permitted.

    In an ideal world, I'd also like to see the default behaviour be that code not execute with CAS disabled, thereby preventing folks who aren't even aware of the attribute from shipping code that will run when CAS is disabled, but that horse is probably long gone as well.
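
    (For concreteness, the manual check I described above boils down to something like the following sketch. The helper class and method names are made up, and every "sensitive" member would need to remember to call it - which is exactly the error-by-omission problem:)

    using System.Security;

    public static class CasGuard
    {
        // Hypothetical helper: call at the top of every "sensitive" member.
        // Throws if CAS checks have been switched off machine-wide.
        public static void DemandCasEnabled()
        {
            if (!SecurityManager.SecurityEnabled)
            {
                throw new SecurityException(
                    "This operation requires code access security to be enabled.");
            }
        }
    }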

    As a bit of a side note (and the reason I responded to you rather than Ivan), from the developer perspective, disabling CAS isn't quite the same as granting full trust to all code or tampering with the CLR. The full trust grant won't grant either inappropriate strong name identities or permissions that the assembly explicitly refuses. Tampering with the CLR means changing the platform on which one's code will run, and the code cannot be expected to behave normally under these circumstances. While I'd certainly like to see a nice high bar on tampering with the CLR, I try not to lose much sleep worrying about my clients' possible attempts to bypass my intellectual property protections by hacking the CLR itself. Then again, I do hope that those particular sheep keep a few Microsoft employees awake at night… <g>
  • In response to Nicole's comments ... You're right that a disabled CAS environment is not very secure, and the CLR team has been looking at some ways to address this situation. However, I feel that your concerns won't be addressed by simply disallowing CAS to be turned off.

    Almost all of your concerns revolve around the assumption that the default CAS policy is still in effect. However, this may not be, and quite possibly is not, the case. Even if the default policy is in effect, there's nothing stopping FullTrust code that runs on your local computer from accessing your private members. Even if your assembly was not granted FullTrust, it's very easy for me to whip up an assembly that lives on the local machine, asserts some permissions, and then reflects over your code.
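
    (To make that concrete, here's a sketch of what such an assembly might do. The names are made up; the point is that FullTrust includes ReflectionPermission(MemberAccess), so only ordinary, documented reflection calls are needed:)

    using System.Reflection;
    using System.Security.Permissions;

    class Snoop
    {
        // Reads another assembly's private state via plain reflection.
        static object ReadPrivateField(object target, string fieldName)
        {
            // Granted automatically as part of FullTrust.
            new ReflectionPermission(ReflectionPermissionFlag.MemberAccess).Assert();
            FieldInfo field = target.GetType().GetField(
                fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
            return field == null ? null : field.GetValue(target);
        }
    }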

    Basically, once you've supplied your code physically to an admin to put on their machine, it's game over for you. If you need to defend your IP, you must host on a machine you control, and allow access through another mechanism.
  • Shawn: Full trust does not automatically confer strong name identity permissions if an assembly is not appropriately signed. Aside from the delay signing and verification skipping issue (another pet peeve of mine <g>), use of strong name identity permissions would go a long way toward helping to prevent misuse of low accessibility members if CAS could not be disabled. As things stand now, the best one can do is protect members other than fields by imposing the following:

    1. Demand/LinkDemand a suitable StrongNameIdentityPermission at the class or member level, as appropriate.
    2. Within every "sensitive" member, regardless of its accessibility:
    a. Verify that CAS is enabled.
    b. Verify that the callers that are subject to the StrongNameIdentityPermission demand are not merely delay-signed.

    #2 is a great, big PITA.
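
    (By way of illustration, the pattern looks something like this. The PublicKey value is a placeholder, not a real key - the actual hex blob would come from running sn -Tp against the signed assembly:)

    using System.Security;
    using System.Security.Permissions;

    public class SensitiveApi
    {
        // Step 1: LinkDemand that immediate callers be signed with our key.
        [StrongNameIdentityPermission(SecurityAction.LinkDemand,
            PublicKey = "00240000048000009400000006020000")]
        public void DoSensitiveWork()
        {
            // Step 2a: verify CAS hasn't simply been switched off.
            if (!SecurityManager.SecurityEnabled)
                throw new SecurityException("CAS is disabled.");
            // ... sensitive operation; step 2b (delay-sign check) omitted ...
        }
    }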

    As for fields, one can either hope that one's potential attackers are ignorant or lazy, or one can acknowledge that it's essentially impossible to maintain a true input boundary in any assembly or class written for the current .NET Framework and, therefore, treat even private fields as if they come from external, untrusted sources. The first approach is rather naive, and the second can lead to some very big performance problems (as well as yet more nasty, boring work that is very prone to error-by-omission).

    You're correct in stating that preventing disabling of CAS checks wouldn't address all my concerns, but blunting one corner of the "evil triangle" would be a start. I'd also love to see similar declarative mechanisms for prevention of invocation beyond declared accessibility or by delay-signed assemblies. However, being able to more easily prevent any of the three would offer some relief for those of us who need to raise the bar at least a bit against possible attack by legitimate users and administrators of our applications and the machines upon which they run. If I had to pick just one, it would be the accessibility issue, but disabling CAS was the topic on hand.
  • Nicole, you are trying to win an unwinnable battle here. If a malicious user has complete control of the system, then they can do anything they want. It isn't your system any more.

    The best the CLR can do is protect your code from within the confines of the managed CLR environment itself. As soon as you go outside of the managed environment, it's game over. For example, the hacker just uses unmanaged code to twiddle the in-memory bits that represent your StrongNameLinkDemand so that it will succeed.

    Or, they just ILDASM your code, remove all the security checks, and ILASM it back again.

    In fact, if the attacker has full control of the system, what are you trying to "protect"? THEY DON'T EVEN NEED YOUR SOURCE CODE ANY MORE!

    What does it mean to "misuse low accessibility members" if the attacker is running with FullTrust? There's nothing your code can do that they can't do themselves.

    I don't understand the kinds of scenarios you are presenting. As Shawn says, the ONLY way to protect against this is hardware (physical) isolation -- expose a web service ON A MACHINE YOU CONTROL and stop worrying about attacks you cannot hope to stop ;-)
  • I'm well aware of the various attack approaches available from within both managed and unmanaged code, and I'm not suggesting that preventing CAS from being disabled is somehow going to render managed code unhackable. However, allowing CAS to be disabled does make it easier to perform some types of attacks. There is a big difference in the levels of both expertise and effort required to scrape or modify in-memory bits vs simply using documented and supported reflection methods. The latter approach is trivially scriptable, and one might even be able to call PSS for help if one runs into trouble. <g>

    As for the physical isolation issue, it does not help when a legitimate administrator of the machine might be considered a potential attacker. In such scenarios, legal protections are usually the final line of defense, but this does not mean that it should not be as difficult as possible for an administrator to abuse the system in the first place.

    Please note that my main complaint here is wrt the level of effort and difficulty, not that a hack is possible at all. Allowing CAS to be disabled makes it more difficult to secure an application but easier to hack it. That's just plain ridiculous.

    There's also the side issue concerning use of StrongNameIdentityPermission. Flipping the scenario request around, can you think of any case in which it would be worthwhile to implement strong name checks (including the added design and testing effort) despite the fact that CAS can be disabled? The entire purpose of the StrongNameIdentityPermission is to allow the developer of a component to block callers that even the administrator of a machine may otherwise trust. If that purpose can be so easily circumvented _by design_, then the permission probably shouldn't exist in the first place.
  • If the administrator can't be trusted, fire them.

    Maybe I need to write a blog about basic security principles...

    Until then, everyone should read this:

    http://www.microsoft.com/technet/archive/community/columns/security/essays/10imlaws.mspx

    and this:

    http://www.microsoft.com/technet/archive/community/columns/security/essays/10salaws.mspx
  • You've already lost the game.