You may have heard a thing or two last week about a little project we like to call Silverlight, including a small version of the CLR that will run in the browser on both Windows and the Mac. (If you haven't grabbed the Silverlight v1.1 alpha bits yet, I highly recommend it -- as well as grabbing the SDK and heading over to the quickstarts site and forums so that you can try it out for yourself).
Since the v1.1 release of Silverlight includes a slimmed down version of the CLR, you might be wondering what the managed security story for Silverlight is and how it compares to CAS on the desktop version of the CLR.
The good news for everyone who's spent hours deciphering cryptic caspol commands is that Silverlight removes CAS entirely. Instead, the security model is based around an enhanced version of the v2.0 transparency model. When I say CAS is gone, I mean it's really gone -- there are no permissions, no policy levels, and no stack walks in CoreCLR. The System.Security namespace is pretty barren indeed! (But wait -- why is SecurityPermission still exposed from mscorlib then? It turns out that the C# compiler will emit a RequestMinimum for SkipVerification if your assembly contains unverifiable code. Since we need to be able to use the existing C# compiler to create Silverlight assemblies, we had to keep that one permission in the public surface area).
In place of CAS, the CoreCLR security model can be boiled down to the following two statements:

- All Silverlight application code is security transparent.
- Security transparent code may only call other transparent code or safe critical code.

In other words, the essence of the CoreCLR security model is that Silverlight applications may not contain unverifiable code and may only call transparent or safe critical APIs exposed by the platform. Let's dig a little deeper into this.
Since transparency forms the, ahem, core of the CoreCLR security model, let's take a minute to review the basics of what transparency means, starting with the model we already know from the v2.0 framework. Transparent code is code that cannot take any action that would elevate the permissions of the call stack. Essentially, security transparent code cannot cause any security check to succeed (although it can cause one to fail), so you can think of it as running with the permissions of its caller. The opposite of transparent code is critical code, and an assembly may contain a combination of transparent and critical code. An individual method, however, is either transparent or critical; it cannot be a mix of both.
The specific restrictions placed upon security transparent code are that it may not:

- Assert for permissions
- Satisfy a LinkDemand
- Contain unverifiable code
- Call native code directly (for example, via P/Invoke or COM interop)
- Call security critical code
So let's start translating this to the Silverlight world. We know that there's no such thing as CAS in Silverlight, so the first two restrictions don't make sense there. (Without CAS there are no permissions to assert or demand -- not even via a LinkDemand.) On the desktop CLR, the next two restrictions are enforced by injecting a demand for UnmanagedCode. Since Silverlight has no concept of demands, that enforcement mechanism isn't available; instead, we simply throw a MethodAccessException if transparent code tries to violate one of these rules.
The last rule is the most interesting of the group. If transparent code attempts to call critical code directly, a MethodAccessException is thrown. However, most of the interesting system services must be implemented in critical code. (For instance, accessing the file system requires a P/Invoke to the operating system's file IO APIs; since calling native code is available only to critical code, any API that writes to the file system must itself be critical.) If that's the case, how does a Silverlight application access any of these interesting services?
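The call rules above can be sketched as a tiny model. This is Python purely for illustration -- every name here is invented, and the real enforcement happens inside the CLR itself -- but it captures the shape of the rule: code is tagged with a security level, a direct call from transparent code into critical code is rejected, and safe critical code is allowed through.

```python
# Toy model of the CoreCLR transparency rule: transparent code may not
# call critical code directly. All names here are invented for
# illustration; none of this is a real CLR API.

class MethodAccessError(Exception):
    """Stands in for the CLR's MethodAccessException."""

def critical(fn):
    fn._security_level = "critical"
    return fn

def safe_critical(fn):
    fn._security_level = "safe-critical"
    return fn

def call_from_transparent(fn, *args, **kwargs):
    # Transparent code may invoke transparent or safe critical targets,
    # but a direct call to critical code is rejected.
    level = getattr(fn, "_security_level", "transparent")
    if level == "critical":
        raise MethodAccessError(f"{fn.__name__} is security critical")
    return fn(*args, **kwargs)

@critical
def write_raw_bytes(path, data):
    return len(data)  # stands in for a P/Invoke to native file IO

@safe_critical
def write_isolated(name, data):
    # Safe critical code is itself critical, so it may call critical
    # code on the transparent caller's behalf.
    return write_raw_bytes("/isolated/" + name, data)

assert call_from_transparent(write_isolated, "notes.txt", b"hi") == 2
try:
    call_from_transparent(write_raw_bytes, "/etc/x", b"hi")
except MethodAccessError as e:
    print("blocked:", e)
```

Note that the model rejects the call based only on the target's tag, which mirrors how the rule needs no stack walk or permission set -- just the caller's and callee's transparency classification.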
This is accomplished by an intermediate layer of safe critical code that acts as a gatekeeper for the critical methods, ensuring that it is safe for transparent code to perform the operation it is requesting. These safe critical APIs may perform various checks before passing control to a critical API, such as validating incoming parameters and ensuring that the application state allows the call to continue. Since safe critical methods are themselves critical code, once they determine that the caller is allowed to proceed, they may invoke a critical method on the caller's behalf.
For example, earlier I mentioned that file IO must be implemented as critical code. Some form of persistent storage is still useful to an application, however, so we need a safe critical layer that transparent code can call, and which, after ensuring the call is valid, passes the request on to the critical file IO layer. In Silverlight, this safe critical layer is IsolatedStorage. When a Silverlight application calls IsolatedStorage, the IsolatedStorage APIs validate the request by making sure the application is asking for a valid file and is not over its quota. Then they call the critical file APIs to perform the actual work of reading from or writing to disk.
You can think of this as being very similar to an operating system model. In that analogy:

- Critical code is like the operating system's kernel.
- Safe critical code is like the system call layer sitting on top of the kernel.
- Transparent code is like a user-mode application.
In the same way that applications running on Windows or the Mac cannot call directly into the kernel of the operating system without passing through a system call layer, Silverlight applications cannot directly call critical code without passing through a safe critical layer. In the file IO example, a Silverlight application may not directly write to disk without first passing through the IsolatedStorage layer.
That's a lot of information -- but thankfully it can be summed up very easily: the CoreCLR security model in Silverlight is that all Silverlight applications consist entirely of security transparent code, and this transparent code may only call other transparent code or safe critical code.
Over the next few posts, we'll explore a few more details of the security model, such as how to tell which code is transparent and which is critical, and some new rules regarding inheritance.