Shawn has some great blog entries on how to create restricted (or "sandboxed") AppDomains in the CLR by setting up custom AppDomain policy. Perhaps not surprisingly, this is one of the techniques used by Visual Studio Tools for Office to ensure that untrusted code doesn't run inside an Office solution. (And, for the curious, here's another technique we had to use to let some things slip through, although it appears that the underlying problem has now been fixed, so we should be pulling it out of VSTO 2005.)
Anyway, there are two other key things that you really should do to help isolate the untrusted code in your AppDomain:
1) Set a different "app base" for the AppDomain to avoid assembly leakage
2) Set top-of-stack evidence for the AppDomain to protect against luring attacks
As you may know, the CLR basically looks in two places for assemblies:
· If the assembly is strong-named, it looks in the GAC
· If the assembly is not strong-named or it was not found in the GAC, it looks in the application's directory
(It's a bit more complicated than that, but you can read about the full details here).
So let's say you have your application MyApp.exe and an untrusted add-in, AddIn.dll, in the same directory on your local machine. Using Shawn's techniques, you set up a new AppDomain to load the add-in and you set policy on that domain so that AddIn.dll only gets Internet Zone permissions. While code from AddIn.dll is running inside the second AppDomain, it is prevented from doing "bad things" by the policy set on that domain. But what if the add-in can somehow cause one of its types to "leak" back out into the main domain?
For example, the add-in might define a custom exception type and then throw it back to the application. When the application deserialises the exception object, the type will get loaded back into the main AppDomain... with FullTrust! (Because the assembly is on the local disc and the default policy in effect in the main AppDomain will still grant it unrestricted access to the machine).
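To make the leak concrete, here's a hedged sketch of the kind of exception type a malicious add-in might define. The type name and message are hypothetical; the point is that the [Serializable] attribute and serialization constructor are all it takes for the type to marshal across the AppDomain boundary, forcing the main domain to load AddIn.dll (with FullTrust under default policy) in order to deserialise it:

```csharp
// Hypothetical type an add-in could define inside AddIn.dll.
// When thrown across the AppDomain boundary, the *main* domain must load
// AddIn.dll to deserialise it -- and if AddIn.dll sits on the local disc,
// default policy grants that load FullTrust.
using System;
using System.Runtime.Serialization;

[Serializable]
public class EvilAddInException : Exception
{
    public EvilAddInException(string message) : base(message) { }

    // Required for cross-AppDomain (remoting) deserialisation
    protected EvilAddInException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}
```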
The way to get around this is to force your add-ins into a separate directory (possibly, but not necessarily, a subdirectory of your application's main directory) and then to create your partially-trusted AppDomain with an AppDomainSetup that points to this directory. The CLR can then load the add-in's assembly into the add-in AppDomain, but any attempt to "leak" types back to the main domain will fail because fusion will not be able to find the assembly. This does mean that any assemblies that must be shared between the two AppDomains (e.g., an interface assembly that defines both the add-in's and the host application's members) must either be in the GAC or in both directories so that fusion can load them on both sides of the AppDomain boundary, but that isn't too hard to do.
The second mitigation is to stop an (unlikely) attack whereby fully-trusted code is loaded into the AppDomain and is tricked into performing some operation on behalf of the malicious add-in, but where the add-in itself is not on the stack. This might be tricky to do under "normal" circumstances (which is why I say it is unlikely, although you may be able to do it if the trusted code runs in a separate thread), but one easy way to do it would be to register a callback object with the host application, where the callback is implemented by a trusted component. (Doing this with the normal delegate-based eventing system should fail, but a roll-your-own callback mechanism might not be so well protected).
What happens in this case is that the host application calls some dangerous method Foo that is implemented by the fully-trusted code in the add-in's AppDomain. The trusted code attempts to perform the dangerous action -- say, deleting a file off disk -- which of course initiates a stack walk. But the security system only sees the trusted code, the trusted AppDomain boundary, and the trusted host on the stack, so it lets the operation succeed.
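The roll-your-own callback scenario above might look something like this sketch (the interface and class names are hypothetical, invented for illustration). Unlike the built-in delegate-based eventing, nothing here gives the security system a way to associate the registered callback with the add-in that supplied it, so when the host invokes it, only trusted code appears on the stack:

```csharp
// Hypothetical roll-your-own callback mechanism -- names are invented.
// A malicious add-in registers a callback object that is actually
// implemented by a fully-trusted component; the host later invokes it
// with no add-in code on the stack, so a Demand would succeed.
public interface IHostCallback
{
    void Execute();
}

public class HostApplication
{
    private IHostCallback callback;

    // The add-in passes in the trusted component's callback object here.
    public void RegisterCallback(IHostCallback cb)
    {
        callback = cb;
    }

    // Later: only the host and the trusted callback are on the stack.
    public void RaiseCallback()
    {
        if (callback != null)
            callback.Execute();
    }
}
```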
By creating the AppDomain itself with evidence that will only grant restricted permissions, such attacks can be mitigated because when the security system performs the stack walk it will see the trusted code, and then the untrusted AppDomain boundary, and fail the operation immediately.
It's easy to set up an AppDomain this way:
// Requires System, System.IO, System.Security, System.Security.Policy
// and System.Windows.Forms
private void RunAddIn(object sender, EventArgs e)
{
    // Create AppDomainSetup with a different AppBase
    AppDomainSetup ads = new AppDomainSetup();
    ads.ApplicationBase = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "addin");

    // Create Evidence for top-of-stack; Internet Zone evidence means the
    // domain itself will only be granted Internet Zone permissions
    Evidence ev = new Evidence();
    ev.AddHost(new Zone(SecurityZone.Internet));

    // Create the AppDomain with the setup and evidence objects
    AppDomain ad = AppDomain.CreateDomain("AddIn Domain", ev, ads);

    // Setup AppDomain policy per Shawn's posts, but do the setup
    // in the remote domain (for obscure reasons -- I'll probably
    // blog my own sample solution for this soon if Shawn doesn't ;-) )
    SetupHelper helper = ad.CreateInstanceAndUnwrap("SetupHelper", "SetupHelper.SetupHelper") as SetupHelper;

    try
    {
        // Create the add-in and do something with it
        IAddIn addIn = ad.CreateInstanceAndUnwrap("AddIn", "AddIn.AddIn") as IAddIn;
        addIn.DoSomething(); // hypothetical add-in method
    }
    catch (Exception ex)
    {
        // We're now better protected if the exception is malicious
        MessageBox.Show(ex.ToString(), "Main app's exception handler");
    }
}
[Edited a typo in my code]