Bridging the Gap Between Transparent and Critical Code

Last time we looked at the set of operations that can only be performed by security critical code. One interesting observation is that just because you are doing one of these operations does not mean that your method in and of itself is security sensitive. For instance, you might implement a method with unverifiable IL as a performance optimization; however, that optimization is done in an inherently safe way.
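As a purely illustrative sketch of that idea (the type and method names here are hypothetical, not real framework code), the method below uses an unsafe pointer loop, which is unverifiable IL and therefore must be critical, yet it is inherently safe because it never writes outside the array it was handed:

```csharp
using System;
using System.Security;

public static class BufferHelpers
{
    // Unsafe code compiles to unverifiable IL, so this method must be
    // security critical - but every write stays inside the array's bounds.
    [SecurityCritical]
    public static unsafe void Fill(byte[] buffer, byte value)
    {
        if (buffer == null)
            throw new ArgumentNullException("buffer");

        fixed (byte* p = buffer)
        {
            for (int i = 0; i < buffer.Length; i++)
            {
                p[i] = value;
            }
        }
    }
}
```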

Another example of a safe operation that uses security critical constructs is the Isolated Storage example from that post. Although Isolated Storage performs a security assert, which is a security sensitive operation that requires the asserting code to be critical, it makes this assert safe through several techniques, including issuing a demand for IsolatedStoragePermission.
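A minimal sketch of that demand-then-assert pattern looks something like this. To be clear, this is not the actual Isolated Storage implementation, and the type and method names are hypothetical:

```csharp
using System.Security;
using System.Security.Permissions;

public static class StoreSketch
{
    [SecuritySafeCritical]
    public static void TouchStoreFile()
    {
        // First, the demand: only callers that were actually granted
        // isolated storage access make it past this line.
        new IsolatedStorageFilePermission(PermissionState.Unrestricted).Demand();

        // Only then assert the file permission needed to reach the backing
        // store on disk, and revert the assert before leaving the method.
        new FileIOPermission(PermissionState.Unrestricted).Assert();
        try
        {
            // ... open the file under the isolated storage root ...
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

The ordering is the key design point: the demand filters out sandboxes that were never granted isolated storage, and the assert never escapes the method.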

Similarly, the file classes might use P/Invokes to implement their functionality. However, they make this safe by issuing a demand for FileIOPermission, which ensures that they can only be used in sandboxes that have explicitly decided to allow access to the file system.
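Again as a hedged sketch rather than the real File class code (the type and helper names are hypothetical), the demand-before-P/Invoke pattern looks something like this:

```csharp
using System.IO;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;

public static class NativeFileSketch
{
    // P/Invokes are implicitly security critical, so this stays private.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool DeleteFile(string path);

    [SecuritySafeCritical]
    public static void Delete(string path)
    {
        string fullPath = Path.GetFullPath(path);

        // Refuse to proceed unless the sandbox explicitly granted write
        // access to this path; only then is the native call reachable.
        new FileIOPermission(FileIOPermissionAccess.Write, fullPath).Demand();

        if (!DeleteFile(fullPath))
            throw new IOException("Unable to delete " + fullPath);
    }
}
```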

In these cases, you might want transparent code to be able to call into your method, since the fact that it is doing something critical is more of an implementation detail than anything else. These methods form the boundary between the security sensitive portions of your code and the security transparent portions, and are marked as Security Safe Critical.
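In attribute terms, the three layers look something like this (the method names are hypothetical):

```csharp
using System.Security;

// In an APTCA assembly, code defaults to security transparent.
[assembly: AllowPartiallyTrustedCallers]

public static class Layers
{
    // Callable only from critical or safe critical code.
    [SecurityCritical]
    private static void DoSensitiveWork()
    {
        // ... asserts, P/Invokes, unverifiable IL ...
    }

    // The bridge: transparent code may call this, so it owns the safety checks.
    [SecuritySafeCritical]
    public static void DoWorkSafely()
    {
        // ... demand permissions / validate arguments here ...
        DoSensitiveWork();
    }

    // Transparent by default; calling DoSensitiveWork directly would fail,
    // but calling DoWorkSafely is fine.
    public static void TransparentCaller()
    {
        DoWorkSafely();
    }
}
```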

A security safe critical method can do everything that a security critical method can do; however, it does not require that its callers (or overriding methods) be security critical themselves. Instead, it takes on the responsibility of validating that all of its operations are safe. This includes (but is not limited to):

  1. Verifying that the core operations it is performing are safe. A method that formats the hard disk, for instance, would never be security safe critical.
  2. Verifying that the inputs that the method uses make sense. For example, Isolated Storage only allows access to paths within the Isolated Storage root and rejects attempts to use its APIs to open arbitrary files on the machine.
  3. Verifying that the outputs are also safe. This includes the obvious: return values, output parameters, and reference parameters. However, non-obvious results of operations are included here as well; for example, exceptions thrown and even state transitions of objects need to be safe. If outside code can observe a change caused by using a safe critical method, then that safe critical method is responsible for ensuring that exposing the change is safe to do. (The sketch after this list illustrates both the input checks of point 2 and these output checks.)
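Points 2 and 3 might look something like the following sketch. The names are hypothetical, and a real implementation would also assert its way past the file demand, as in the Isolated Storage sketch above; that is elided here to keep the focus on validation:

```csharp
using System;
using System.IO;
using System.Security;

public static class StoreReaderSketch
{
    private static readonly string s_root = @"C:\ExampleStore\";

    [SecuritySafeCritical]
    public static byte[] ReadEntry(string relativePath)
    {
        // Input check: canonicalize first, then verify the result is still
        // under the store's root so "..\" tricks cannot escape it.
        string fullPath = Path.GetFullPath(Path.Combine(s_root, relativePath));
        if (!fullPath.StartsWith(s_root, StringComparison.OrdinalIgnoreCase))
            throw new ArgumentException("Path is outside of the store.", "relativePath");

        try
        {
            return File.ReadAllBytes(fullPath);
        }
        catch (IOException)
        {
            // Output check: don't let the raw exception leak the full
            // on-disk path to transparent callers.
            throw new InvalidOperationException("Unable to read the entry.");
        }
    }
}
```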

With this in mind, another interesting observation can be made. Since the security safe critical layer of code is the bridge between transparent code and security critical code, it really forms the attack surface of a library.

This means that upon adopting the security transparency model, an APTCA library's audit burden really falls mostly upon the security safe critical surface of the library. If any of the safe critical code in the assembly is not correctly verifying operations, inputs, or outputs, then that safe critical method is a security hole that needs to be closed.

Conversely, since transparent (and therefore, in .NET 4, partially trusted) code cannot directly call through to security critical code, the audit burden on the critical code is significantly reduced. Similarly, since transparent code cannot be doing any security sensitive operations, it also carries a much lighter security audit burden.

By using security transparency, therefore, it becomes very easy to identify the sections of code that a security review needs to focus on in order to make sure that the shipping assembly is secure.

(Our internal security reviewers sometimes jokingly refer to the SecuritySafeCriticalAttribute as the BigRedFlagAttribute.)