Threat Modeling Again, Threat Modeling Rules of Thumb

I wrote this piece up for our group as we entered the most recent round of threat models. I've cleaned it up a bit (removing some Microsoft-specific stuff); some of it has been discussed here before, but the rest of the document is still pretty relevant.

 

---------------------------------------

As you go about filling in the threat model threat list, it’s important to consider the consequences of entering threats and mitigations. While it can be easy to find threats, remember that every threat you enter has real-world consequences for the development team.

At the end of the day, this process is about ensuring that our customers’ machines aren’t compromised. When we’re deciding which threats need mitigation, we concentrate our efforts on those where the attacker can cause real damage.

 

When we’re threat modeling, we should ensure that we’ve identified as many of the potential threats as possible (even if they seem trivial). At a minimum, the threats we list but choose to ignore will remain in the document to provide guidance for the future.

 

Remember that the feature team can always decide to accept the risk of a particular threat (subject to the SDL security review process). But we want to make sure that we mitigate the right issues.

To help guide your thinking about what kinds of threats deserve mitigation, here are some rules of thumb that you can use while performing your threat modeling.

1. If the data hasn’t crossed a trust boundary, you don’t really care about it.

2. If the threat requires that the attacker is ALREADY running code on the client at your privilege level, you don’t really care about it.

3. If your code runs with any elevated privileges (even if your code runs in a restricted svchost instance) you need to be concerned.

4. If your code invalidates assumptions made by other entities, you need to be concerned.

5. If your code listens on the network, you need to be concerned.

6. If your code retrieves information from the internet, you need to be concerned.

7. If your code deals with data that came from a file, you need to be concerned (these last two are the inverses of rule #1).

8. If your code is marked as safe for scripting or safe for initialization, you need to be REALLY concerned.

 

Let’s take each of these in turn, because there are some subtle distinctions that need to be called out.

If the data hasn’t crossed a trust boundary, you don’t really care about it.

For example, consider the case where a hostile application passes bogus parameters into our API. In that case, the hostile application lives within the same trust boundary as your application, so you can simply certify the threat. The same thing applies to window messages that you receive. In general, it’s not useful to enumerate threats within a trust boundary. [Editor's Note: Yesterday, David LeBlanc wrote an article about this very issue - I 100% agree with what he says there.]

But there’s a caveat (of course there’s a caveat, there’s ALWAYS a caveat). Just because your threat model diagram doesn't have a trust boundary on it doesn't mean that the data being validated hasn't crossed a trust boundary on the way to your code.

Consider the case of an application that takes a file name from the network and passes that filename into your API. And further consider the case where your API has an input validation bug that causes a buffer overflow. In that case, it’s YOUR responsibility to fix the buffer overflow – an attacker can use the innocent application to exploit your code. Before you dismiss this issue as being unlikely, consider CVE-2007-3670. The Firefox web browser allows the user to execute scripts passed in on the command line, and registered a URI handler named “firefoxurl” with the OS, with the start action being “firefox.exe %1” (this is a simplification). The attacker simply included a “firefoxurl:<javascript>” URL in a web page and was able to successfully take ownership of the client machine. In this case, the Firefox browser assumed that there was no trust boundary between firefox.exe and its invoker, but it didn’t realize that it introduced exactly such a trust boundary when it created the “firefoxurl” URI handler.
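
To make that caveat concrete, here's a minimal sketch of the difference between trusting your immediate caller and validating the data itself. The API name and buffer handling are hypothetical, not taken from any real component:

    #include <windows.h>
    #include <string.h>

    // Hypothetical API that receives a file name. The direct caller is
    // inside our trust boundary, but the name may have crossed a trust
    // boundary upstream (it came from the network), so validate it anyway.
    HRESULT OpenLogFile(const char *fileName)
    {
        // An unchecked strcpy here would be exactly the buffer overflow
        // described above, exploitable through an innocent caller.
        if (fileName == NULL || strlen(fileName) >= MAX_PATH)
        {
            return E_INVALIDARG;
        }

        char path[MAX_PATH];
        strcpy_s(path, sizeof(path), fileName);  // cannot overflow: length checked
        // ... open the file using the validated copy ...
        return S_OK;
    }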

If the threat requires that the attacker is ALREADY running code on the client at your privilege level, you don’t really care about it.

For example, consider the case where a hostile application writes values into a registry key that’s read by your component. Writing those keys requires that some application already be running code on the client, which in turn means the bad guy has first found a way to get code to run on the client box.

While the threats associated with this are real, they’re not that big a problem: you can probably state that you aren’t concerned by them, because they require that the bad guy already be running code on the box (see Immutable Law #1: “If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore”).

Please note that this item has a HUGE caveat: it ONLY applies if the attacker’s code is running at the same privilege level as your code. If that’s not the case, you have the next rule of thumb:

If your code runs with any elevated privileges, you need to be concerned.

We DO care about threats that cross privilege boundaries. That means that any data communication between an application and a service (which could be an RPC, it could be a registry value, it could be a shared memory region) must be included in the threat model.

Even if you’re running in a low privilege service account, you still may be attacked – one of the privileges that all services get is the SE_IMPERSONATE_NAME privilege. This is actually one of the more dangerous privileges on the system, because it can allow a patient attacker to take over the entire box. Ken “Skywing” Johnson wrote about this in a couple of posts (1 and 2) on his excellent blog Nynaeve. David LeBlanc has a subtly different take on this issue (see here), but the reality is that David and Ken agree more than they disagree here. If your code runs as a service, you MUST assume that you’re running with elevated privileges. This applies to all data read – rule #2 (requiring an attacker to run code) does not apply when you cross privilege levels, because the attacker could be running code under a low-privilege account to mount an elevation of privilege attack.
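
As a concrete illustration, here's a minimal sketch of a service treating a registry value that lower-privileged code can write as fully untrusted input. The value name and the specific checks are assumptions for illustration:

    #include <windows.h>

    // A service reads a registry value that low-privilege code can write.
    // The data crosses a privilege boundary, so check the type, the size,
    // and the termination before using it.
    BOOL ReadConfigString(HKEY key, wchar_t *buffer, DWORD bufferBytes)
    {
        if (bufferBytes < 2 * sizeof(wchar_t))
        {
            return FALSE;
        }

        DWORD type = 0;
        DWORD size = bufferBytes - sizeof(wchar_t);  // reserve room for a terminator

        if (RegQueryValueExW(key, L"ConfigPath", NULL, &type,
                             (LPBYTE)buffer, &size) != ERROR_SUCCESS)
        {
            return FALSE;  // includes "value too large for the buffer"
        }
        if (type != REG_SZ)
        {
            return FALSE;  // the attacker controls the type; insist on a string
        }
        // Registry strings are not guaranteed to be null terminated.
        buffer[size / sizeof(wchar_t)] = L'\0';
        return TRUE;
    }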

In addition, if your component has a use scenario that involves running it elevated, you also need to consider that in your threat modeling.

If your code invalidates assumptions made by other entities, you need to be concerned.

The reason that the firefoxurl problem listed above was such a big deal was that the firefoxurl handler invalidated some of the assumptions made by the other components of Firefox. When the Firefox team threat modeled Firefox, they made the assumption that Firefox would only be invoked in the context of the user. As such, it was totally reasonable to add support for executing scripts passed in on the command line (see rule of thumb #1). However, when they threat modeled the firefoxurl: URI handler implementation, they didn’t consider that they had now introduced a trust boundary between the invoker of Firefox and the Firefox executable.

So you need to be aware of the assumptions of all of your related components and ensure that you’re not changing those assumptions. If you are, you need to ensure that your change doesn’t introduce new issues.

If your code retrieves information from the internet, you need to be concerned.

The internet is a totally untrusted resource (no duh). But this has profound consequences when threat modeling. All data received from the internet MUST be treated as totally untrusted and must be subject to strict validation.
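
For instance, here's a minimal sketch of the kind of strict validation that applies to a length-prefixed message read off the wire. The message format and the size cap are assumptions for illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_PAYLOAD 4096  /* arbitrary cap for this sketch */

    /* Parse a length-prefixed message received from the internet. The
       length field is attacker controlled, so bound it by both the cap
       and the bytes actually received. The payload buffer must hold at
       least MAX_PAYLOAD bytes. */
    bool ParseMessage(const uint8_t *packet, size_t packetLen,
                      uint8_t *payload, size_t *payloadLen)
    {
        uint32_t declaredLen;

        if (packetLen < sizeof(declaredLen))
        {
            return false;  /* too short to contain the length prefix */
        }
        memcpy(&declaredLen, packet, sizeof(declaredLen));

        /* Never trust the declared length. */
        if (declaredLen > MAX_PAYLOAD ||
            declaredLen > packetLen - sizeof(declaredLen))
        {
            return false;
        }

        memcpy(payload, packet + sizeof(declaredLen), declaredLen);
        *payloadLen = declaredLen;
        return true;
    }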

If your code deals with data that came from a file, then you need to be concerned.

In the previous section, I talked about data received over the internet. Microsoft has issued several bulletins this year for vulnerabilities that required an attacker to trick a user into downloading a specially crafted file over the internet; as a consequence, ANY file data must be treated as potentially malicious. For example, MS07-047 (a vulnerability in WMP) required that the attacker convince the user to view a specially crafted WMP skin. The consequence of this is that ANY file parsed by our code MUST be treated as coming from a lower level of trust.

Every single file parser MUST treat its input as totally untrusted – MS07-047 is only one example of an MSRC vulnerability in a file parser; there have been others. Any code that reads data from a file MUST validate the contents. It also means that we need to work to ensure that we have fuzzing in place to validate our mitigations.
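
A fuzzing harness for a file parser doesn't have to be elaborate. Here's a minimal libFuzzer-style sketch (a tool this post predates; ParseFile is a hypothetical parser entry point). The fuzzer calls the function repeatedly with mutated inputs and flags crashes and assertion failures:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical entry point of the file parser under test. */
    extern int ParseFile(const uint8_t *data, size_t size);

    /* The fuzzer links against this and drives it with mutated files. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        ParseFile(data, size);
        return 0;
    }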

And the problem goes beyond file parsers directly. Any data that can possibly be read from a file cannot be trusted. <A senior developer in our division> brings up a codec as a perfect example. The file parser parses the container and determines that the container isn't corrupted. It then extracts the format information and finds the appropriate codec for that format. The parser then loads the codec and hands the format information and file data to the codec.

The only thing that the codec knows is that the format information that’s been passed in is structurally valid. That’s it. Beyond the fact that the format information is of an appropriate size and has a verifiable type, the codec can make no assumptions about the contents of the format information, and it can make no assumptions about the file data. Even though the codec doesn’t explicitly parse the file, it’s still dealing with untrusted data read from the file.
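
To make that concrete, here's a minimal sketch of the re-validation a codec might do on the format information it's handed. The structure, field names, and limits are all hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical format header handed from the container parser. */
    typedef struct {
        uint32_t sampleRate;
        uint16_t channels;
        uint16_t bitsPerSample;
        uint32_t frameSize;
    } FormatInfo;

    /* The codec re-validates every field even though the container
       parser already ran: the values still originate in an untrusted
       file. */
    bool ValidateFormat(const FormatInfo *fmt)
    {
        if (fmt == NULL)
        {
            return false;
        }
        if (fmt->sampleRate == 0 || fmt->sampleRate > 384000)
        {
            return false;
        }
        if (fmt->channels == 0 || fmt->channels > 8)
        {
            return false;
        }
        if (fmt->bitsPerSample != 8 && fmt->bitsPerSample != 16 &&
            fmt->bitsPerSample != 24 && fmt->bitsPerSample != 32)
        {
            return false;
        }
        /* frameSize must agree with the other fields before it is used
           to size a buffer or index an array. */
        return fmt->frameSize == fmt->channels * (fmt->bitsPerSample / 8u);
    }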

If your code is marked as “Safe For Scripting” or “Safe for Initialization”, you need to be REALLY concerned.

If your code is marked as “Safe For Scripting” (or if your code can be invoked from a control that is marked as Safe For Scripting), it means that your code can be executed in the context of a web browser, and that in turn means that the bad guys are going to go after your code. There have been way too many MSRC bulletins about issues with ActiveX controls.

Please note that some of the issues with ActiveX controls can be quite subtle. For instance, in MS02-032 we had to issue an MSRC fix because one of the APIs exposed by the WMP OCX returned a different error code depending on whether a path passed into the API was a file or a directory – that constituted an Information Disclosure vulnerability, and an attacker could use it to map out the contents of the user’s hard disk.
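
Here's a sketch of that class of bug and one way to close the oracle. The function is hypothetical, and the actual MS02-032 fix may have differed in its details:

    #include <windows.h>

    /* A scriptable control's method takes an attacker-supplied path. If
       "is a directory" and "does not exist" produce different error
       codes, a web-page script can probe the disk layout one path at a
       time. Collapse the failures into a single code. */
    HRESULT OpenMediaFile(const wchar_t *path)
    {
        if (path == NULL)
        {
            return E_FAIL;
        }

        DWORD attributes = GetFileAttributesW(path);

        /* BAD: returning distinct HRESULTs for "missing" and "directory"
           tells the script which paths exist on the disk. */
        if (attributes == INVALID_FILE_ATTRIBUTES ||
            (attributes & FILE_ATTRIBUTE_DIRECTORY))
        {
            return E_FAIL;  /* every failure looks the same from script */
        }

        /* ... open and play the file ... */
        return S_OK;
    }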

In conclusion

Vista raised the security bar for attackers significantly. As Vista adoption spreads, attackers will be forced to find new ways to exploit our code. That means it’s more and more important that we do a good job of giving them as few opportunities as possible to make life difficult for our customers. The threat modeling process helps us understand the risks associated with our features and where we need to look for potential issues.

Comments



  • Anonymous
    September 24, 2007
    @Bill: Java and safe languages are not yet ready to be used for system-level software.  They have hard-to-control failure conditions that make them unreliable in low-memory or other resource-constrained situations.  At least this is true in the CLR (may or may not be true in Java).   Not to forget that the long legacy of systems out there has been written in C and other unsafe languages.  Microsoft would be reinventing the wheel even more if they were to move to a microkernel or to a safe OS (MSR is doing this with Singularity).   Lastly, buffer overflows are checked for automatically in MS compilers.  Look up PreFAST and the /analyze compiler switch sometime.  These tools aren't perfect, but they do catch a number of bugs.  For another interesting tool that is being applied at Microsoft, look up "Automated Whitebox Fuzz Testing" in your favorite search engine.   Threat modeling, as Larry describes it, is just a way for Microsoft to focus more manual resources on reviewing sensitive code for bugs which the automated tools may have missed.

  • Anonymous
    October 01, 2007
    Bill's comments above in some ways reiterate Larry's rule number 2: If an attacker has already gotten you to run his code, you don't care what exactly that code can do -- you've already lost. Therefore, you need to be really concerned (paranoid, actually) about threats that allow an attacker to run arbitrary code without your consent. But it's a losing battle to try to stop that arbitrary code, once running, from doing evil things.


  • Anonymous
    October 01, 2007
    Larry, keep up the good work, I wish we had more people like you in Redmond.

  • Anonymous
    October 01, 2007
    Please stop using, stop quoting, and stop teaching "Immutable Law #1."  It's simply wrong, and it's deceptively named. One of the important jobs of an operating system is to isolate and protect applications from one another.  To assume the so-called "Immutable Law #1" is to pretend that this responsibility doesn't exist.  Yes -- it is an assumption, not a law, and to call it "immutable" is to mislead the reader into accepting a broken security model instead of demanding better. I'm sure you've heard of encapsulation?  Defense in depth?  How about the principle of least privilege?  The thinking behind Law #1 -- that when a user runs a program, that program automatically gets full rights to pull all the user's privileges out of thin air and exercise them in whatever way it wants -- runs counter to all of these fundamental security concepts. A revision of your Law that's somewhat closer to the truth would be something like: "If a bad guy can persuade you to run his program on your computer, and your operating system allows that program to damage the system, other programs, or your data, it's not your computer anymore."  The operating system does not get a free pass.


  • Anonymous
    October 05, 2007
    Ping is right.  There is no reason every program I run needs all my privileges.  I always launch my browser in a restricted user account using a variant of runAs.  More than once a malicious script has run that didn't do anything worse than make me delete the account and create a new one.  In other words, a bad guy persuaded me to run his program on my computer and it was still my computer.  Sounds like Law #1 isn't so immutable after all. (Full disclosure: Ping and I worked together for a while.)
