
Phase 4: Verification

During the Verification phase, you ensure that your code meets the security and privacy tenets you established in the previous phases. This is done through security and privacy testing, and a security push—which is a team-wide focus on threat model updates, code review, testing, and thorough documentation review and edit. A public release privacy review is also completed during the Verification phase.

On This Page

Security and Privacy Testing
Security Requirements
Security Recommendations
Privacy Recommendations
Security Push
Push Preparation
Push Duration
Security Requirements
Privacy Requirements
Security Recommendations

Security and Privacy Testing

Security testing addresses two broad areas of concern:

  • Confidentiality, integrity, and availability of the software and data processed by the software. This area includes all features and functionality designed to mitigate threats as described in the threat model.

  • Freedom from issues that could result in security vulnerabilities. For example, a buffer overrun in code that parses data could be exploited in ways that have security implications.

Begin security testing very soon after the code is written. This testing stage requires one full test pass after the code is complete because potential issues and vulnerabilities might change during development.

Security testing is important to the Security Development Lifecycle. As Michael Howard and David LeBlanc note in Writing Secure Code, Second Edition, “The designers and the specifications might outline a secure design, the developers might be diligent and write secure code, but it’s the testing process that determines whether the product is secure in the real world.”

Security Requirements

  • Where input to file parsing code could have crossed a trust boundary, file fuzzing must be performed on that code using a recommended tool, and all issues must be fixed as described in the Security Development Lifecycle (SDL) Bug Bar. (A minimal harness sketch appears after this list.)

    • Win32/64/Mac: An optimized set of templates must be used. Template optimization is based on achieving the maximum code coverage of the parser with the minimum number of templates; optimized templates have been shown to double fuzzing effectiveness in studies. Each parser must complete a minimum of 500,000 iterations and must have run at least 250,000 iterations since the last bug found and fixed that meets the SDL Bug Bar.
    • WinCE and Xbox: 100,000 bug-free iterations since the last bug found and fixed that meets the SDL Bug Bar. All file fuzzing bugs must be filed and triaged according to the SDL Bug Bar’s guidance.
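
    The following sketch illustrates the shape of such a template-based file-fuzzing harness. It is illustrative only, not a substitute for an approved tool: the template files, the parser.exe command line, and the crash detection are all hypothetical, and real runs should triage crashes under a debugger against the SDL Bug Bar.

      # Minimal template-based file-fuzzing harness (sketch; names are hypothetical).
      import os
      import random
      import subprocess
      import tempfile

      TEMPLATES = ["template1.doc", "template2.doc"]  # optimized template set (hypothetical)
      PARSER = ["parser.exe"]                         # file parser under test (hypothetical)

      def mutate(data: bytes, flips: int = 16) -> bytes:
          """Corrupt a handful of randomly chosen bytes in a template."""
          buf = bytearray(data)
          for _ in range(flips):
              buf[random.randrange(len(buf))] = random.randrange(256)
          return bytes(buf)

      iterations_since_last_bug = 0
      for iteration in range(500_000):  # the SDL minimum iteration count
          with open(random.choice(TEMPLATES), "rb") as f:
              fuzzed = mutate(f.read())
          with tempfile.NamedTemporaryFile(suffix=".doc", delete=False) as tmp:
              tmp.write(fuzzed)
          try:
              result = subprocess.run(PARSER + [tmp.name], timeout=30)
              crashed = result.returncode != 0  # real tools triage under a debugger
          except subprocess.TimeoutExpired:
              crashed = True  # hangs are triaged like crashes against the bug bar
          if crashed:
              print(f"potential bug at iteration {iteration}: kept {tmp.name}")
              iterations_since_last_bug = 0  # restart the 250,000 clean-iteration count
          else:
              os.unlink(tmp.name)  # discard inputs that did not trigger a bug
              iterations_since_last_bug += 1
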
  • If the program exposes remote procedure call (RPC) interfaces, you must use an RPC fuzzing tool to test for problems; RPC fuzzers are available on the Internet. All fuzz testing must be conducted using “retail” (not debug) builds, and all issues must be corrected as described in the SDL Bug Bar.

  • If the project uses ActiveX controls, use an ActiveX fuzzer to test for problems. ActiveX controls pose a significant security risk and require fuzz testing; fuzzers are available on the Internet. Conduct all fuzz testing using “retail” (not debug) builds, and correct all issues as described in the SDL Privacy Bug Bar (Sample) and SDL Security Bug Bar (Sample) appendices.

  • Satisfy Win32 testing requirements as described in Appendix J: SDL Requirement: Application Verifier. The Application Verifier is easy to use and identifies issues that are MSRC patch-class issues in unmanaged code. AppVerifier requires a modest resource investment and should be used throughout the testing cycle; note that it is not optimized for managed code. (A sample command-line invocation appears below.)
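
    As an illustration, AppVerifier can be driven from the command line roughly as follows. The target binary name is hypothetical, and the exact switches should be confirmed against the Application Verifier documentation for your SDK version.

      rem Enable heap, handle, and lock checks for the binary under test
      appverif -enable Heaps Handles Locks -for myapp.exe

      rem After running the functional test suite, export the log for review
      appverif -export log -for myapp.exe -with To=myapp.appverif.xml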

  • Define and document the security bug bar for the product. Verify that a security bug bar has been established and approved. Vulnerabilities include:

    • Elevation of privilege (the ability either to execute arbitrary code or to obtain more privilege than intended)
    • Denial of service
    • Targeted information disclosure (where the attacker can locate and read information from anywhere on the system, including system information, that was not intended or designed to be exposed)
    • Spoofing
    • Tampering (permanent modification of any user data or data used to make trust decisions in a common or default scenario that persists after restarting the operating system or application)
  • For online services and/or LOB applications, use approved cross-site scripting scanning test tools and enter all vulnerabilities found into your bug tracking system. All vulnerabilities must be addressed prior to the Final Security Review. (A minimal reflection-probe sketch appears below.)
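
    A minimal reflection probe along the following lines can supplement, but not replace, an approved cross-site scripting scanner; the URLs and parameter names are hypothetical.

      # Minimal reflected-XSS probe (sketch; an approved scanner is still required).
      import requests

      MARKER = '"><script>alert(31337)</script>'
      TARGETS = [
          ("https://app.example.com/search", "q"),      # hypothetical pages/parameters
          ("https://app.example.com/profile", "name"),
      ]

      for url, param in TARGETS:
          resp = requests.get(url, params={param: MARKER}, timeout=10)
          # If the marker is echoed back unencoded, the page is likely injectable;
          # file the finding in the bug tracking system and triage it against the bug bar.
          if MARKER in resp.text:
              print(f"possible reflected XSS: {url} parameter {param!r}")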

  • For online services and/or LOB applications that implement web services, use an approved scanner to check for XML parsing problems.

  • Complete testing for kernel-mode drivers. The product team must complete the following testing for every kernel-mode driver:

    Driver Verifier

    1. Using Windows Vista or Windows Server® 2008, complete a full functional test on the driver with Driver Verifier enabled using /standard mode.
    2. Execute all code paths in the driver with Driver Verifier enabled using /standard mode.

    Device Path Exerciser

    1. Run Device Path Exerciser specifically against each driver in the product (using the /dr parameter).
    2. Run Device Path Exerciser with Driver Verifier enabled.

    To meet the exit criteria, every kernel-mode driver in the product must pass the Driver Verifier and Device Path Exerciser tests. Driver Verifier is available in the Windows Driver Kit (see Driver Development Tools -> Tools for Verifying Drivers -> Driver Verifier or, on MSDN, https://msdn.microsoft.com/en-gb/library/ff545448.aspx). Device Path Exerciser is also available in the Windows Driver Kit (see Driver Development Tools -> Tools for Testing Drivers -> Device Path Exerciser or, on MSDN, https://msdn.microsoft.com/en-gb/library/ff544851.aspx). Sample invocations appear below.
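
    For illustration, the two tools are typically invoked as follows. The driver name is hypothetical, and the exact switches should be confirmed against the WDK documentation for your version.

      rem Enable Driver Verifier standard checks for the driver under test,
      rem then reboot and run the full functional test suite
      verifier /standard /driver mydriver.sys

      rem Run Device Path Exerciser against a specific driver using the /dr parameter
      dc2 /dr mydriver.sys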

  • COM object testing. Any product that ships a registered COM object must meet the following minimum criteria:

    1. COM objects must be compiled and tested with the SDL required switches enabled (for example, a COM object must be tested with NX and ASLR flags applied to the control and on a machine with NX and ASLR enabled).
    2. All methods in a COM object's supported interfaces must execute without access violations when called with valid data.
    3. COM objects must follow all published rules on reference counting. See the MSDN documentation on AddRef and Release.
    4. COM objects must be tested for reliable query, instantiation, and interrogation by any COM container without returning an invalid pointer, leaking memory, or causing access violations.
    5. COM objects must follow the published rules for QueryInterface.
  • (Web applications only) If a site provides any authenticated access, then the crossdomain.xml or clientaccesspolicy.xml files for the site must only allow specifically enumerated authorized sites (that is, no wildcards). When using JavaScript, do not set document.domain to a shared top-level domain (for example, microsoft.com); use a more specific domain instead. (A sample restrictive policy file appears after this list.) Exit criteria are as follows:

    Read-Only Unauthenticated Sites and Services
    Sites and Web services that do not require authentication and provide read-only information have no action items for this requirement. However, keep in mind that policy files are site-wide, so a policy meant for an unauthenticated site will also apply to any other sites on the same server. If the application is a public service that could be used in mashups, other Web services, or Flash or Silverlight™ applications and thus requires a permissive crossdomain.xml or accesspolicy.xml file (one allowing * or a broad top-level domain, like msn.com or live.com), then interactive Web sites or authenticated APIs may not be hosted on the same domain.

    Authenticated Web Sites
    If an application is a standard Web UI (not a service) that hosts Web services for its own use, or has Flash and Silverlight components on the site, any crossdomain.xml or clientaccesspolicy.xml file in the root directory must allow access only to the sites that contain the appropriate Flash and Silverlight components or Web services.

    Authenticated Web Services
    If a site has functions available only to authenticated users but also needs to be accessed by a Flash or Silverlight application, ensure that any Flash or Silverlight applications that the site uses load the policy file only from the root directory of the site, and ensure that it does not set domain="*". In addition, if such a site must be accessed by Silverlight applications, ensure a clientaccesspolicy.xml that allows only the desired sites is present, since Silverlight does not honor Flash crossdomain.xml files with policies other than "*". Authenticated sites with Flash and Silverlight front-ends must always use crossdomain.xml or clientaccesspolicy.xml to restrict access, since an open policy (domain="*") will allow any Internet site the user visits to take action as the user.

    JavaScript
    Scripts setting document.domain to any value should be validated to ensure that:

    1. The site checks that the caller is on a list of allowed sites before setting document.domain.
    2. If the site deals with PII in any way, document.domain is not set to a top-level domain (for example, live.com) but only to an appropriate subdomain (for example, billing.live.com).
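
    As a sketch of the restrictive policy this requirement calls for, a crossdomain.xml that enumerates authorized sites looks like the following; the domains shown are hypothetical. A clientaccesspolicy.xml for Silverlight should likewise enumerate the same specific domains rather than a wildcard.

      <?xml version="1.0"?>
      <!-- Restrictive policy sketch: authorized sites are enumerated explicitly,
           with no wildcards. The domains are hypothetical. -->
      <cross-domain-policy>
        <allow-access-from domain="www.contoso.com" secure="true"/>
        <allow-access-from domain="billing.contoso.com" secure="true"/>
      </cross-domain-policy>
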
  • Perform Application Verifier tests. Test all discrete applications within a shipping product for heap corruption and Win32 resource issues that might lead to security and reliability issues. You can detect these issues using AppVerifier. Exit Criteria: All tests in the application's functional test suite have been run under AppVerifier, and all issues have been fixed.

  • Network fuzzing. Fuzzing of network interfaces is one of the primary tools of security researchers and attackers, and network-facing applications are arguably the most easily accessed target for a remote attacker. Each network parser must successfully handle 100,000 malformed packets without error. (A minimal sketch appears below.)
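
    A network fuzzing pass can be sketched as follows, assuming hypothetical captured seed packets and a parser listening locally; production fuzzing should use an approved tool and monitor the target under a debugger.

      # Minimal network-fuzzing sketch (seed packets, host, and port are hypothetical).
      import random
      import socket

      SEED_PACKETS = [b"\x01\x00HELLO\x00", b"\x02\x00STATUS\x00"]  # captured samples
      HOST, PORT = "127.0.0.1", 9999  # parser under test (hypothetical)

      def corrupt(packet: bytes) -> bytes:
          """Randomly corrupt a few bytes of a captured packet."""
          buf = bytearray(packet)
          for _ in range(random.randint(1, 8)):
              buf[random.randrange(len(buf))] = random.randrange(256)
          return bytes(buf)

      for _ in range(100_000):  # the requirement: 100,000 malformed packets
          with socket.create_connection((HOST, PORT), timeout=5) as s:
              s.sendall(corrupt(random.choice(SEED_PACKETS)))
          # A separate watchdog or debugger should confirm the service is still
          # healthy after each iteration and record any crash for triage.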

  • Binary analysis. If obfuscated binaries are being shipped, BinScope must be run on the pre-obfuscated version of each binary instead of the obfuscated version to ensure that issues were identified correctly.

Security Recommendations

  • Create and complete security testing plans that address these issues:

    • Security features and functionality work as specified. Ensure that all security features and functionality that are designed to mitigate threats perform as expected.
    • Security features and functionality cannot be circumvented. If a mitigation can be bypassed, an attacker can try to exploit software weaknesses, rendering security features and functionality useless.
    • Ensure general software quality in areas that can result in security vulnerabilities. Supplying malformed or unexpected data to input-handling and parsing code is a common way attackers try to exploit software, so validate all such code against this kind of input. Data fuzzing is a general testing technique that can help find such vulnerabilities before attackers do.
  • Penetration testing. Use the threat models to determine priorities, test, and attack the software as a hacker might. Use existing tools or design new tools if needed.

    • Hire third-party security firms as appropriate. Depending on the business goals for your project and availability of resources, consider engaging an external security firm for a security review and/or penetration testing.
  • Develop and use vulnerability regression tests. If the code has ever had a security vulnerability reported, it is strongly suggested that you add regression tests to the test suite for that component to ensure that similar vulnerabilities are not inadvertently reintroduced to the code. Similarly, if there are other products with similar functionality in the market that have suffered publicly reported vulnerabilities, add tests to the test plan to prevent similar vulnerabilities. (A sketch of such a regression test appears below.)
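
    A vulnerability regression test might look like the following sketch, which assumes a previously reported path traversal bug in a hypothetical open_attachment() helper; the test pins the fix so it cannot silently regress.

      # Regression test sketch for a past path-traversal fix (module and helper
      # are hypothetical).
      import pytest

      from myproduct.storage import open_attachment  # hypothetical code under test

      @pytest.mark.parametrize("evil", [
          "../../../../boot.ini",
          "..\\..\\windows\\system32\\config\\sam",
          "valid.txt\x00.jpg",
      ])
      def test_path_traversal_regression(evil):
          # The fixed code must reject any name that escapes the attachment root.
          with pytest.raises(ValueError):
              open_attachment(evil)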

  • For online services and/or LOB applications, conduct data flow testing. Any externally accessible pages and interfaces must have tests. This should include pages that automatically redirect.

  • Run through your test cases with WinHTTP, the debug version of wininet, or another application that captures all page transitions. Make sure that no part of the flow can be bypassed.

  • If the feature exposes SOAP or DCOM interfaces or any other services, these must also be tested. Ensure that no step can be skipped or bypassed.

  • If your feature requires authenticating a user before providing access, ensure that it is not possible to bypass this authentication step by connecting directly to the back end. (A minimal check appears below.)
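
    A minimal direct-to-back-end check, with a hypothetical endpoint, can assert that unauthenticated requests are refused:

      # Authentication-bypass check sketch (endpoint is hypothetical).
      import requests

      # Call a back-end API that normally sits behind the login flow,
      # deliberately presenting no credentials or session cookies.
      resp = requests.get("https://backend.example.com/api/account/export", timeout=10)

      # Anything other than an explicit authentication failure is a finding.
      assert resp.status_code in (401, 403), f"unauthenticated access allowed: {resp.status_code}"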

  • For online services and/or LOB applications, conduct replay testing. Replay all messages for any scenario you are responsible for to ensure that the expected outcome occurs. For example, try to change the password and then replay the original request, or attempt to reuse security tokens in other contexts (for example, try using a login token in a password reset flow). (A sketch appears below.)
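
    The token-reuse case can be sketched as follows; the endpoints, field names, and token format are hypothetical.

      # Replay-testing sketch: reuse a login token in the password reset flow.
      import requests

      session = requests.Session()
      login = session.post("https://app.example.com/login",
                           data={"user": "testuser", "password": "Passw0rd!"})
      token = login.json()["token"]  # token issued for the login flow (hypothetical)

      # The reset flow should only accept its own single-use reset tokens.
      reset = session.post("https://app.example.com/password/reset",
                           data={"token": token, "new_password": "NewPassw0rd!"})
      assert reset.status_code != 200, "login token accepted in the password reset flow"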

  • For online services and/or LOB applications, cover input validation testing scenarios and variants. Do not do this through a web browser, since it will honor server-specified field lengths. (A payload-generation sketch appears after this list.) Test cases must cover the following scenarios:

    • Random inputs. Ensure that a full range of ASCII and Unicode characters are used. All verification should be “allow” based instead of “block” based (deny everything that is not explicitly allowed).

    • Large inputs. Large strings should be attempted.

    • Script injection.

    • SQL injection.

    • Path traversal. Try to pass filenames, such as ../../../../../../../boot.ini, to bypass directory access controls.

    • Malformed XML blobs. Attempt to submit XML that does not match the target schema. If your feature uses XSL, attempt to pass XSL processing instructions within your input. Note that this is best done either by using valid, but slightly incorrect, XML data to bypass the .NET validation code or by disabling .NET validation checks before testing. Final release code used in production environments must not disable XML and other validation code in .NET.
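
    The following sketch submits the listed variants directly over HTTP, bypassing browser-side length limits; the endpoint and field names are hypothetical.

      # Input-validation variants posted directly to the service (sketch;
      # endpoint and fields are hypothetical).
      import random
      import requests

      PAYLOADS = [
          "".join(chr(random.randrange(0x20, 0x2FA0)) for _ in range(64)),  # random Unicode
          "A" * 1_000_000,                                   # large input
          "<script>alert(1)</script>",                       # script injection
          "' OR 1=1 --",                                     # SQL injection
          "../../../../../../../boot.ini",                   # path traversal
          "<order><qty>1</qty><qty>2</qty></order>",         # XML not matching the schema
      ]

      for payload in PAYLOADS:
          resp = requests.post("https://app.example.com/api/order",
                               data={"item": payload}, timeout=30)
          # Allow-based validation should reject each variant with a clean error,
          # never a server fault (5xx) or an unencoded echo of the input.
          print(resp.status_code, payload[:40])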

  • Secure Code Review. Security code reviews are a critical component of the Security Development Lifecycle. Given the opportunity to review old code or work on a cool new feature, developers lean toward the latter. Unsurprisingly, attackers don't target only new functionality; they will attack all code, regardless of its age. Waiting to make the code more secure in the next version of the product is not a good solution for protecting customers, and therefore high-risk (Critical) items that are considered the most sensitive and important for security should be reviewed in depth at the earliest opportunity.

    Determine the most at-risk components (Critical) and perform an in-depth security review of the code making up those components. For critical components or if time allows, also review Important items. Use the following guidelines to determine the most at-risk components.

    1. Define the code review priority based on these criteria:

      • Critical code is considered to be the most sensitive from a security standpoint. The following are examples of Critical code, though not necessarily a definitive list: all Internet- or network-facing code; code in the Trusted Computing Base (TCB), such as kernel or SYSTEM code; code running as administrator or Local System; code running as an elevated user (including LocalService and NetworkService); and features with a prior history of vulnerability, regardless of version. Any code that handles secret data, such as encryption keys and passwords, is considered Critical code. For managed code, Critical code is any unverifiable code (any code that the standard PEVerify.exe tool reports as not verified). All code supporting functionality exposed on the maximum attack surface is considered Critical code by definition.

      • Important code is optionally installed code that runs with user privilege, or code that is installed by default that doesn't meet the Critical criteria.

      • Moderate code is rarely used code and setup code. Setup code that handles secret data, such as encryption keys and passwords, is always considered Critical code.

      • Any code or component with high rates of security bug discovery is considered to be Critical code, even if it otherwise maps to Important or Moderate per the previous definitions. While the definition of high rates is subjective within the team, it is important to examine with extra scrutiny the portions of code that have experienced the highest rates of security issues.

      • Don't forget to include and prioritize all sample code shipped with the product. While generalized guidelines are difficult, consider how customers will use the samples. Samples that are expected to be compiled and used with few changes in production environments should be considered Critical. "Hello World" applications are more likely to be considered Moderate code.

    2. Identify development and testing owners for everything in the product.

    The following are required to meet this security recommendation:

    • All Critical source code should be thoroughly reviewed by inspection teams and code-scanning tools.
    • All Important code should be reviewed using code-scanning tools and some human analysis.
    • Development owners for all source code and testing owners for all binaries have been identified, documented, and archived.
    • All source code is assessed and assigned a severity of Critical, Important, Moderate, or Low. This information is recorded in a document or spreadsheet and is archived.
  • Use a passive security auditor. Use Watcher and Fiddler to detect vulnerabilities. Browse through every page in your web application (or run a prerecorded Web macro that hits every page) with the Watcher plug-in for Fiddler enabled. If Watcher finds any potential vulnerabilities, you must fix them. Repeat this process until the run is completed with no flagged issues.

Privacy Recommendations

  • For P1 and P2 projects, include privacy testing in your master test plan. Privacy testing of platform components deployed in organizations should include verification of organizational policy controls that affect privacy (these controls are listed in Appendix C: SDL Privacy Questionnaire). Privacy testing for features that transfer data over the Internet should include monitoring network traffic for unexpected network calls.

Security Push

A security push is a team-wide focus on threat model updates, code review, testing, and thorough documentation review and edit. A security push cannot compensate for a lack of security discipline earlier in development; rather, it is an organized effort to uncover changes that might have occurred during development, improve security in any legacy code, and identify and remediate any remaining vulnerabilities. However, it should be noted that it is not possible to build security into software with only a security push.

A security push occurs after a product has entered the verification stage (reached code/feature complete). It usually begins at about the time beta testing starts. Because the results of the security push might alter the default configuration and behavior of a product, you should perform a final beta test review after the security push is complete and after all issues and required changes are resolved.

It is important to note that the goal of a security push is to find vulnerabilities, not to fix them. The time to fix vulnerabilities is after you complete the security push.

Push Preparation

A successful push requires planning:

  • You should allocate time and resources for the push in your project’s schedule, before you begin development. Rushing the security push will cause problems or delays during the Final Security Review.

  • Your team’s security coordinator should determine what resources are required, organize a security push leadership team, and create the needed supporting materials and resources.

  • The security representative should determine how to communicate security push information to the rest of the team. It is helpful to establish a central intranet location for all information related to the push, including news, schedules, plans, forms and documents, white papers, training schedules, and links. The intranet site should link to internal resources that help the group execute the security push. This site should serve as the primary source of information, answers, and news for employees during the push.

  • There must be well-defined criteria to determine when the push is complete.

Your team will need training before the push. At a minimum, this training should help team members understand the intent and logistics of the push itself. Some members might also require updated security training and training in security or analysis techniques that are specific to the software that is undergoing the push. The training should have two components—the push logistics, delivered by a senior member of the team conducting the push, and technical and role-specific security training.

Push Duration

The amount of time, energy, and team-wide focus that a security push requires differs depending on the status of the code base and the amount of attention the team has given to security earlier in development. A security push requires less time if your team has:

  • Rigorously kept all threat models up to date.

  • Actively and completely subjected those threat models to penetration testing.

  • Accurately tracked and documented attack surfaces and any changes made to them.

  • Completed security code reviews for all high-priority code (see discussion later in this section for details about how priority is assessed).

  • Identified and documented development and testing contacts for all code released with the product.

  • Rigorously brought all legacy code up to current security standards.

  • Validated the security documentation plan.

The duration of a security push is determined by the amount of code that needs to be reviewed for security. Try to conduct security code reviews throughout development, after the code is fairly stable. If you try to condense too many code reviews into too brief a time period, the quality of code reviews suffers. In general, a security push is measured in weeks, not days. You should aim to complete the push in three weeks and extend the time as necessary.

Security Requirements

  • Review and update threat models. Examine the threat models that were created during the design phase. If circumstances prevented creation of threat models during the design phase, you must develop them in the earliest phase of the security push.

  • Review all bugs that affect security against the security bug bar. Ensure that all security bugs contain the security bug bar rating.

Privacy Requirements

Review and update the SDL Privacy Questionnaire form (Appendix C) for any material privacy changes that were made during the implementation and verification stages. Material changes include:

  • Changing the style of consent

  • Substantively changing the language of a notice

  • Collecting different data types

  • Exhibiting new behavior

Security Recommendations

  • Conduct security code reviews for at-risk components. Use the following information to help determine which components are most at risk, and use this determination to set priorities for security code review. High-risk (Critical) items must be reviewed earliest and most in depth. For a minimal checklist of security issues to be aware of during code reviews, see "Appendix D: A Developer’s Security Checklist" in Writing Secure Code, Second Edition (p. 731).

  • Identify development and testing owners for everything in the program. Identify a development owner for each source code file. Identify a quality assurance owner for each binary file. Record this information in a document or spreadsheet and use a document/source tracking system to store it.

  • Prioritize all code before you start the push. Track severity ratings in a document or spreadsheet that lists the development and quality assurance owners. Subject all code to the same criteria for prioritization, including legacy code. Many security vulnerabilities have come from legacy code that was created before the introduction of security pushes, threat modeling, and the other processes that are included in the Security Development Lifecycle.

  • Ensure that you include and prioritize all sample code shipped with the product. Consider how users will use the samples. Samples that are expected to be compiled and used with small changes in production environments should be considered Critical.

  • Re-evaluate the attack surface of the software. It is important to re-evaluate your team’s definition of attack surface during the security push. You should be able to calculate the attack surface based on information described in the design specifications for the software. Measurement of the attack surface enables you to understand which components have direct exposure to attack and the highest risk of damage if a security breach occurs. Focus effort on the areas of highest risk, and take appropriate corrective actions. These actions might include:

    • Prolonging the push for especially error-prone components.

    • Deciding not to ship a component until it is corrected.

    • Disabling a component by default.

    • Re-designating a component for future removal from the software (deprecating it).

    • Modifying development practices to make vulnerabilities less likely to be introduced by future modifications or new developments.

    After you evaluate the attack surface, update the attack surface documentation as appropriate.

  • As time permits, consider code reviews for all components tagged with a severity level of Important.

  • Review the security documentation plan. Examine how any changes to the product design during development have affected security documentation. Ensure that the security documentation plan will meet all user needs.

  • Focus the entire team on the push. When team members finish reviewing and testing their own components, they should help others in the group.

Code severity definitions are provided in the following list:

  • Critical code is considered the most sensitive from a security standpoint. The following examples of Critical code are not necessarily a definitive list:

    • All Internet-facing or network-facing code.

    • Code in the Trusted Computing Base (TCB) (for example, kernel or SYSTEM code).

    • Code running as administrator or Local System.

    • Code running as an elevated user (including LocalService and NetworkService).

    • Features with a history of vulnerability, regardless of version.

    • Any code that handles secret data, such as encryption keys and passwords.

    • Any unverifiable managed code (any code that the standard PEVerify.exe tool reports as not verified).

    • All code supporting functionality exposed on the maximum attack surface.

  • Important code is optionally installed code that runs with user privilege or code that is installed by default that does not meet the Critical criteria.

  • Moderate code is rarely used code and setup code. (Setup code that handles secret data, such as encryption keys and passwords, is always considered Critical code.)

  • Any code or component that has experienced large numbers of security issues is considered Critical code, even if it would otherwise be considered Important or Moderate. Although the definition of large numbers is subjective, it is important to scrutinize carefully the portions of code that contain the most security vulnerabilities.

Content Disclaimer

This documentation is not an exhaustive reference on the SDL process as practiced at Microsoft. Additional assurance work may be performed by product teams (but not necessarily documented) at their discretion. As a result, this example should not be considered as the exact process that Microsoft follows to secure all products.

This documentation is provided “as-is.” Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it.

This documentation does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.

© 2012 Microsoft Corporation. All rights reserved.

Licensed under Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported