Transparency 101: Basic Transparency Rules

One of the biggest changes in the .NET 4 security model is a move toward security transparency as a primary security enforcement mechanism of the platform. As you'll recall, we introduced security transparency in the v2 release of .NET as more of an audit mechanism, in order to help make the surface area of APTCA libraries as safe as possible. In Silverlight, we evolved transparency into the security model that the entire managed platform was built on top of. With .NET 4 we continue that evolution, making security transparency the consistent way to enforce security on both Silverlight and the desktop CLR.

Before we dive deep into what all this means, let's take a quick refresher on the basic concepts of transparency.

The fundamental idea of security transparency is to separate code which may potentially do dangerous or security sensitive things from code which is benign from a security perspective. The security sensitive code is called security critical, and the code which does not perform security sensitive operations is called security transparent.
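
To make the split concrete, here is a minimal C# sketch (the type and member names are hypothetical) of how the two kinds of code are annotated in the v4 model:

```csharp
using System.Security;

// Security transparent: performs no security sensitive operations.
public static class Formatting
{
    public static string Pretty(int value)
    {
        return value.ToString("N0");
    }
}

public static class NativeBridge
{
    // Security critical: allowed to perform security sensitive operations,
    // and usable only by other security critical code.
    [SecurityCritical]
    public static void DoSomethingSensitive()
    {
        // ... potentially dangerous work would go here ...
    }
}
```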

With that in mind, let's figure out what operations are security sensitive, and therefore require the code performing them to be security critical.

Imagine for a minute that the CLR shipped exactly as-is, but without the ability to do two important operations:

  • Call native code, either via COM Interop or P/Invoke
  • Execute unverifiable code

Without either of these operations, all the code that could run on the CLR would be entirely safe - there would be nothing dangerous it could possibly do. On the flip side, there also wouldn't be very much interesting it could do (taking into account that the BCL is managed code, and would have to abide by these rules as well).

For example, you could write a calculator application or an XML parser library with the operations available to you in verifiable IL; however, the utility of that code would be severely limited by the fact that you could not receive any input from the user of your application (which would require either your app itself or the BCL to interop with native code in order to read from a file or standard input). Similarly, you couldn't display the results of your calculations without talking to native code either.

Obviously the CLR wouldn't be a very interesting platform to write code on if these restrictions were in place, so we need to make these operations available. However, since they both allow taking full control of the process, we need to restrict them to trusted code only. Therefore, calling native code and containing unverifiable code are our first two security critical operations.

(Note that containing unverifiable code and calling native code are the operations here - there's no inherent problem with calling an unverifiable method, and the fact that a method contains unverifiable code does not in and of itself mean that it is dangerous to use.)
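
As a rough sketch of the native code rule, a P/Invoke declaration (a hypothetical wrapper for the Win32 Beep function here) reaches out to native code and therefore must be security critical:

```csharp
using System.Runtime.InteropServices;
using System.Security;

internal static class NativeMethods
{
    // Declaring and calling into native code is security sensitive, so the
    // P/Invoke must be security critical.
    [SecurityCritical]
    [DllImport("kernel32.dll")]
    internal static extern bool Beep(uint frequency, uint duration);
}
```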

We've now determined that code needs to be security critical in order to work with native code or unverifiable code - easy enough; this gives us our first set of security critical methods. However, since these methods are performing security sensitive operations, using them may also be a security sensitive operation. That leads us to our third transparency rule - you must be critical if you:

  • Call critical code
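
Continuing the hypothetical sketch from above, a method that calls the critical NativeMethods.Beep declaration is itself doing something security sensitive, so it must also be security critical:

```csharp
using System.Security;

public static class Sounds
{
    // Calls a security critical method, and therefore must be critical itself.
    [SecurityCritical]
    public static void Chirp()
    {
        NativeMethods.Beep(2000, 100);
    }
}
```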

Some code, such as the File classes, is security sensitive but mitigates its security risk by demanding permission to use it. In the case of the File classes, if the sandbox they are running in is granted the appropriate FileIOPermission then they are safe to use; otherwise they are not.

If trusted code wants to use the File classes in a sandbox that does not support them, it can assert away the file IO demands. For instance, IsolatedStorage does exactly this to allow access to a safe isolated storage file store in sandboxes that do not allow unrestricted access to the user's hard drive.

By doing this, however, the trusted code has removed the mitigation that the original security critical code put in place - the permission demand - and asserted that the demand is no longer necessary for some reason. (In the case of isolated storage, this is because the file paths are well controlled, a quota is being enforced, and an IsolatedStoragePermission demand will be issued.)

Since permission asserts remove security checks, performing an assert is security sensitive.  This means we've now got the fourth operation which requires code to be security critical:

  • Perform a security assert
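
A minimal sketch of what such an assert might look like (the method and path are hypothetical; a real implementation, like isolated storage, would also constrain the paths, enforce a quota, and issue its own demand):

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

public static class TrustedStorage
{
    // Asserting away a permission demand removes a security check, so this
    // method must be security critical.
    [SecurityCritical]
    public static string ReadSettings(string fullPath)
    {
        new FileIOPermission(FileIOPermissionAccess.Read, fullPath).Assert();
        try
        {
            return File.ReadAllText(fullPath);
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}
```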

Some code which performs a security sensitive operation will protect itself with a LinkDemand which, rather than requiring that it only run in a specific sandbox, says that the operation is viable in any sandbox - as long as the code executing the operation is trusted. For example, the Marshal class falls into this category.

Marshaling data back and forth between native and managed code makes sense in every sandbox - it's a generally useful operation. However, you certainly don't want the sandboxed code using methods like ReadByte and WriteByte to start manipulating memory. Therefore, the Marshal class protects itself with a LinkDemand for a full trust equivalent permission.

Since this LinkDemand is Marshal's way of calling out that any use of these methods is security sensitive, our fifth transparency rule is easily derived. Code must be security critical if it attempts to:

  • Satisfy a link demand
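
For example, a hypothetical wrapper that calls Marshal.ReadByte satisfies the full trust LinkDemand protecting that method, and so must itself be security critical:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

public static class BufferInspector
{
    // Satisfies the full trust LinkDemand on the Marshal class, so it must be
    // security critical.
    [SecurityCritical]
    public static byte FirstByte(IntPtr nativeBuffer)
    {
        return Marshal.ReadByte(nativeBuffer);
    }
}
```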

Security transparency and inheritance have an interesting interaction, which is sometimes rather subtle. However, understanding it will lead us to a few more operations that require code to be security critical.

Let's start with security critical types - when a type, such as SafeHandle, declares itself to be security critical, it's saying that any use of that type is potentially security sensitive. This includes not only direct uses, such as creating instances and calling methods on the type, but also more subtle uses - such as deriving from the type. Therefore, a type must be security critical if it wants to:

  • Derive from a non-transparent type or implement a non-transparent interface
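
A sketch of what this looks like for SafeHandle - because the base type is security critical, the derived type (a hypothetical handle wrapper here) must be marked security critical as well:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

// Derives from the security critical SafeHandle type, so the derived type
// must itself be security critical.
[SecurityCritical]
internal sealed class MyNativeHandle : SafeHandle
{
    private MyNativeHandle()
        : base(IntPtr.Zero, true)
    {
    }

    public override bool IsInvalid
    {
        get { return handle == IntPtr.Zero; }
    }

    protected override bool ReleaseHandle()
    {
        // Release the underlying native resource here.
        return true;
    }
}
```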

If a base type has security critical virtual methods, it's interesting to think about what requirements we might want to place on overrides of those virtuals. At first glance there don't appear to be any security requirements for overriding these methods - after all, once you've overridden a method none of its code is going to execute, so the fact that it is security critical doesn't matter.

However, from the perspective of the caller of the security critical virtual method, it is actually rather important that any override of a critical virtual remain security critical.

To see why, let's take an example. X509Certificate provides an Import method which is security critical in the v4 release of the CLR. This method takes both the raw bytes of the certificate and the password necessary to gain access to the private key of that certificate.

Since the code on the other end of the virtual function call is going to be receiving sensitive information, such as a password and a certificate that may have a private key, it is by definition security sensitive. The code which calls the Import virtual is passing this sensitive information through the call under the assumption that the method which will ultimately execute is itself trustworthy. Therefore, methods must be security critical if they:

  • Override a security critical virtual or implement a security critical interface method
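
A sketch of the Import example - the override is handed the raw certificate bytes and the password, so it must be marked security critical just as the base virtual is (the derived type here is hypothetical):

```csharp
using System.Security;
using System.Security.Cryptography.X509Certificates;

public class AuditedCertificate : X509Certificate
{
    // Overrides a security critical virtual and receives sensitive data
    // (certificate bytes and a password), so it must be security critical.
    [SecurityCritical]
    public override void Import(byte[] rawData, string password, X509KeyStorageFlags keyStorageFlags)
    {
        // A real implementation might validate or audit before importing.
        base.Import(rawData, password, keyStorageFlags);
    }
}
```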

This is the final core transparency rule, which completes the core set of things that are security sensitive and therefore require the code doing them to be security critical.

It's interesting to note that this list of critical operations:

  1. Call native code
  2. Contain unverifiable code
  3. Call critical code
  4. Perform security asserts
  5. Satisfy link demands
  6. Derive from non-transparent types
  7. Override security critical virtuals

could also be read as a list of operations that partial trust code cannot perform. In fact, in the v4 CLR we now force all partial trust code to be entirely transparent. Or, put another way, only full trust code can be security critical. This is very similar to the way that Silverlight requires all user assemblies to be entirely transparent, while only Silverlight platform assemblies can contain security critical code. This is one of the basic steps that allowed us to use security transparency as a security enforcement mechanism in Silverlight and the v4 desktop framework.
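
For reference, the assembly-level annotations involved look roughly like this (a sketch): a full trust library opts into the v4 rules and exposes itself to partial trust, and its code is then transparent unless explicitly marked otherwise, while assemblies that are actually loaded in partial trust are forced to be entirely transparent regardless of any annotations:

```csharp
using System.Security;

// Use the v4 (level 2) transparency rules and allow partially trusted callers;
// code in this assembly is security transparent unless a type or member is
// explicitly marked [SecurityCritical].
[assembly: SecurityRules(SecurityRuleSet.Level2)]
[assembly: AllowPartiallyTrustedCallers]
```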
