Security Features vs Security Bugs

Several times when I've been talking with customers about implementing an SDL, or about what they should be doing to secure their in-house developed applications, I get asked a similar set of questions.

· Why do we need to review the design if we review the code?

· Why do we need to scan the code, if we are going to penetration test?

· But we are using an authentication library, and we are using PKI, so why do we need to do a security review?

A lot of this comes down to the difference between security features and security bugs. This is an important distinction, and hopefully this article will lend some clarity to why we do all the things we do in an SDL.

Security Features

Security features are normally driven by regulation, policy, or best practice. They are components you build into your system specifically to control access to it. An example of a security feature is an authentication / authorisation system. You may be using Active Directory, OpenLDAP, or some other kind of system to prove a user's identity and verify that they are allowed to access the things they are requesting access to. Implementing certificate-based digital signatures, or using a centralised input validation system, are other examples of security features.
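
To make this concrete, here is a minimal sketch of what a centralised input validation component might look like. The class name, the allow-list rules, and the field names are all hypothetical; the point is simply that every entry point funnels untrusted input through one deliberately designed place.

```java
import java.util.regex.Pattern;

// Illustrative sketch of a centralised input validation component (a security
// feature): every entry point funnels untrusted input through one place.
// The names and rules here are hypothetical, not from any specific library.
public final class InputValidator {

    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9._-]{1,64}$");
    private static final Pattern ORDER_ID = Pattern.compile("^[0-9]{1,10}$");

    private InputValidator() {}

    public static String requireUsername(String raw) {
        return requireMatch(raw, USERNAME, "username");
    }

    public static String requireOrderId(String raw) {
        return requireMatch(raw, ORDER_ID, "order id");
    }

    private static String requireMatch(String raw, Pattern allowed, String field) {
        if (raw == null || !allowed.matcher(raw).matches()) {
            // Reject anything outside the allow-list rather than trying to "clean" it.
            throw new IllegalArgumentException("Invalid " + field);
        }
        return raw;
    }
}
```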

Some of the things that identify security features for your application are architecture and design decisions, your choice of libraries or third-party components, and the platform features you take advantage of to provide secure access to your system.

Security features are normally tested by a test team during user acceptance testing. They verify that access to certain assets is indeed restricted to authorised users only and, conversely, that authorised users can get to everything they need.
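
In other words, testing a security feature looks like ordinary functional testing. Here is a hedged sketch of what such a test might look like using JUnit; the AccessControl class, the roles, and the resource names are invented for the example.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.Map;
import java.util.Set;
import org.junit.jupiter.api.Test;

// Functional tests for a security feature: does authorisation grant and deny
// access as specified? The AccessControl class below is a hypothetical
// stand-in so the tests have something concrete to exercise.
class AccessControlTest {

    enum Role { MANAGER, CONTRACTOR }

    static class AccessControl {
        private final Map<Role, Set<String>> readable =
                Map.of(Role.MANAGER, Set.of("payroll", "timesheets"),
                       Role.CONTRACTOR, Set.of("timesheets"));

        boolean canRead(Role role, String resource) {
            return readable.getOrDefault(role, Set.of()).contains(resource);
        }
    }

    private final AccessControl acl = new AccessControl();

    @Test
    void managersCanReadPayroll() {
        assertTrue(acl.canRead(Role.MANAGER, "payroll"));
    }

    @Test
    void contractorsCannotReadPayroll() {
        assertFalse(acl.canRead(Role.CONTRACTOR, "payroll"));
    }
}
```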

Some of the tools used to test security features are your regular testing suites and automated test tools; Test Director comes to mind, as do the performance and load tests built into Visual Studio Team Test.

Most people think that these things are what we mean by creating secure software, and in fact they are a large part of it, but there is more to it. Consider this: did you implement your security features securely?

Security Bugs

Security bugs come from implementing things incorrectly. They are most commonly thought of as vulnerabilities. They can be created in any component of the system and will be there regardless of the security components you build into your system. In fact, the security components you build can themselves have security bugs. These are the things that attackers take advantage of to exploit systems.

Some of the things that identify a security bug are validation failures, cross site scripting vulnerabilities, or unexpected input that allows your authorisation components to be bypassed. They are normally created by mistakes during development. They are, in a word, bugs.
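
A classic example of a security bug is SQL injection introduced by string concatenation. The sketch below shows the same lookup written twice; the table and column names are made up and the code is only illustrative, but it shows how an implementation mistake can undo an otherwise well-designed authorisation scheme.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// The same lookup written twice: once with a classic implementation bug
// (SQL injection via string concatenation) and once without. The table and
// column names are hypothetical.
public class OrderLookup {

    // BUG: attacker-controlled input is spliced straight into the query.
    // An orderId of "1 OR 1=1" returns every order, bypassing authorisation.
    public ResultSet findOrderUnsafe(Connection db, String orderId) throws SQLException {
        Statement stmt = db.createStatement();
        return stmt.executeQuery("SELECT * FROM orders WHERE id = " + orderId);
    }

    // Fixed: a parameterised query keeps data and SQL separate.
    public ResultSet findOrder(Connection db, String orderId) throws SQLException {
        PreparedStatement stmt = db.prepareStatement("SELECT * FROM orders WHERE id = ?");
        stmt.setString(1, orderId);
        return stmt.executeQuery();
    }
}
```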

You test for security bugs with fuzz testing, penetration testing, and automated vulnerability scanners. This is where you are not testing for the presence of an authentication mechanism, but for whether you can break or bypass it. Attackers look for these kinds of problems.
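
Fuzz testing, for instance, means throwing malformed and hostile input at a routine and checking that it fails closed. The tiny harness below is only a sketch; parseQuantity is a hypothetical target, and a real fuzzer would generate far more varied input, but the property being checked is the same: the code either honours its contract or throws its documented exception, never anything else.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A tiny fuzz-style harness (illustrative only): feed malformed and random
// inputs to an input-handling routine and check that it fails closed. It must
// either return a value that satisfies its contract or throw its documented
// exception; anything else (a crash, an out-of-range value) is a security bug.
public class QuantityFuzz {

    // Hypothetical example target: accepts only integers between 1 and 100.
    static int parseQuantity(String raw) {
        int value = Integer.parseInt(raw.trim());
        if (value < 1 || value > 100) {
            throw new IllegalArgumentException("quantity out of range");
        }
        return value;
    }

    public static void main(String[] args) {
        List<String> inputs = new ArrayList<>(List.of(
                "", "   ", "-1", "0", "101", "9999999999",
                "1; DROP TABLE orders", "<script>alert(1)</script>", "\u0000"));

        Random rng = new Random(42);
        for (int i = 0; i < 1000; i++) {          // add some random byte noise
            byte[] junk = new byte[rng.nextInt(16)];
            rng.nextBytes(junk);
            inputs.add(new String(junk));
        }

        for (String input : inputs) {
            try {
                int value = parseQuantity(input);
                if (value < 1 || value > 100) {
                    throw new AssertionError("Out-of-range value accepted: " + input);
                }
            } catch (IllegalArgumentException rejected) {
                // Documented rejection path (includes NumberFormatException): fine.
            } catch (RuntimeException crash) {
                throw new AssertionError("Unexpected failure on input: " + input, crash);
            }
        }
        System.out.println("No unexpected failures in " + inputs.size() + " inputs.");
    }
}
```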

The kinds of tools you use to look for these problems are static code analysis tools such as FxCop and CAT.NET, and commercially available tools such as Fortify 360. These are also the kinds of problems that penetration testers look for and exploit with tools like Nessus, Metasploit, and CANVAS.

Now a combination of the two, say a failure to implement an input validation system, can compromise your perfectly implemented authentication / authorisation system. These implementation-level bugs are more insidious because they are harder to detect and easier to overlook, especially if project timeframes are short.

Don't confuse security features, such as secure firewalled networks, AD integration, and Enterprise Library input validation, with security bugs, which can occur in any of these. It is this distinction that drives the multi-level security review and testing approach.

Here is a table that will help define things:

|                    | Security Feature                                                                                                           | Security Bug                                                                  |
|--------------------|----------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| Example            | AD Integration, Enterprise Library Validation Block, Enterprise Library Logging, Input validation components, Firewalls, IPSec | SQL Injection, Cross Site Scripting, Cross Site Request Forgery, Buffer Overflows |
| Identified During  | Architecture Review, Design Review, Threat Modeling                                                                         | Code Review, Code Scanning, Penetration Testing                               |
| How to Test For    | User Testing, Unit Testing                                                                                                   | Pen Testing, Fuzz Testing                                                     |
| Category           | Component, Sub System, Dependencies, Library, API                                                                            | Vulnerability, Mistake, Bug, Accident                                         |
| Problem Prevention | Best Practice, Standards, Policies                                                                                           | Training, Paying Attention to Detail                                          |

The next time a security guy asks you if you have put your developers through secure app dev training, or if you are using an SDL and code scanning, he's not asking whether you use AD integration and have good data access controls. He's asking whether you implemented them correctly.

So remember, make sure you do the right thing, and do the thing right.