Whidbey Security: Pen Testing Engagement

I mentioned in my last blog that we’re heading towards a first step in the "Final Security Review" for Whidbey. A couple of weeks ago, our central security team finished a 9-week-long penetration testing (pen-testing) effort on Whidbey. They had 8 people working just on our product. We began with "education" sessions for the security testing team, spending 1-3 hours on each area of the product. Based on those sessions and their own expertise and experience, the testing team chose a number of areas to focus on. With a product as large as ours, it was impossible to cover everything. Remember that this is somewhat of a safety-net review; we have done extensive training and invested a lot of time in testing and reviewing our product ourselves.

During those 9 weeks, the test team sent out a weekly report listing the areas they had covered and the issues they had logged. They defined severities for the issues, 1 being "Critical" and 4 being "Low". Each issue had a description of the concern and was assigned to the owning team. The agreement was that if they hit multiple critical issues in a component, they would stop testing it and require that component's team to re-confirm its security work. The good news is that this didn’t happen. Not only that, we had no critical issues logged at all! Now, obviously, this is not a foolproof guarantee, but it’s another confirmation that we’ve done the right things.
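(Just to make that stop-the-line rule concrete, here is a minimal sketch in C++ of one way the severity scale and the "multiple critical issues in one component" check could be modeled. The types and names are mine, purely for illustration; this is not the tooling the test team actually used.)

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Illustrative model only: severities run from 1 ("Critical") to 4 ("Low"),
// and each logged issue carries a description and an owning component.
struct Issue
{
    std::string component;   // owning team / component
    int severity;            // 1 = Critical ... 4 = Low
    std::string description;
};

// Returns the components that, under the agreement described above, would
// have to stop and re-confirm their security work: any component with more
// than one critical issue.
std::vector<std::string> ComponentsNeedingReReview(const std::vector<Issue>& issues)
{
    std::map<std::string, int> criticalCount;
    for (const Issue& issue : issues)
    {
        if (issue.severity == 1)
            ++criticalCount[issue.component];
    }

    std::vector<std::string> flagged;
    for (const auto& entry : criticalCount)
    {
        if (entry.second > 1)
            flagged.push_back(entry.first);
    }
    return flagged;
}

int main()
{
    // Hypothetical data, not real findings.
    std::vector<Issue> issues = {
        { "ComponentA", 4, "example low-severity issue" },
        { "ComponentB", 1, "example critical issue" },
    };

    for (const std::string& name : ComponentsNeedingReReview(issues))
        std::cout << name << " must re-confirm its security work\n";

    return 0;
}
```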

Overall, they logged 13 "Important" issues, 8 "Moderate" issues, and 9 "Low" issues. Right now, the teams are analyzing the issues and forming their recommendations. But again, I’m really happy with this outcome. For a product of our size and complexity, these are really good results.

Our next step is to apply for the SWI Final Security Review signoff. This is done through a sort of documentation tool. I will use the tool to document that we have met the requirements of SDL (and a few others that we imposed on ourselves). The documentation involves supplying the team with "proof" such as threat models, the results of tools run on our product, and so on. One thing to remember is that throughout the engagement with SWI, we had a member of their team serve as a virtual member of our security group. This helped us stay in sync with the requirements and make sure we didn’t miss any. Right now, I fully expect us to pass with flying colors.

In parallel to this signoff, I’m working on publishing our release criteria for security. For the most part, they are the same as our Beta2 criteria. Since Beta2 allowed customers to "go-live" with it, we could not afford to hold off on any security work until after that release. We are, however, constantly improving our tools, and from time to time we identify new potential attacks that we want to ensure we are not vulnerable to. So there are a couple of tools (FxCop and PREfast) that we’ve improved and for which I need to set the bar. There are also three additional requirements that our SWI buddy sent us that we have to factor in.

Another effort that I’m working on separately is getting our division to establish a dedicated security team, basically replicating a little SWI of our own. If we had dedicated pen-testers, we could go into a rotation model and ensure coverage for the whole product. We could also have them participate in the early phases of design to provide expert advice on security. I’m getting support for this; now we just need to find the right people.

I think my next blog on this will be when we have the FSR signoff. I can tell you all about how we managed the late-coming changes and what lessons we’ve learned.