Defend Your Code with Top Ten Security Tips Every Developer Must Know

by Michael Howard and Keith Brown

Security is a multidimensional issue. Security risks can come from anywhere. You could write bad error handling code or be too generous with permissions. You could forget what services are running on your server. You could accept all user input.

1. Trust User Input at Your Own Peril

2. Protect Against Buffer Overruns

3. Prevent Cross-site Scripting

4. Don't Require sa Permissions

5. Watch that Crypto Code!

6. Reduce Your Attack Profile

7. Employ the Principle of Least Privilege

8. Pay Attention to Failure Modes

9. Impersonation is Fragile

10. Write Apps that Non-admins Can Actually Use

 

1. Trust User Input at Your Own Peril

Always remember one thing: "don't trust user input." If you always assume that data is well formed and good, then your troubles are about to begin. Most security vulnerabilities revolve around the attacker providing malformed data to the server machine. Trusting that input is well formed can lead to buffer overruns, cross-site scripting attacks, SQL injection attacks, and more.

2. Protect Against Buffer Overruns

A buffer overrun occurs when the data provided by the attacker is bigger than what the application expects and overflows into internal memory space. Buffer overruns are primarily a C/C++ issue. The overflow corrupts other data structures in memory, and this corruption can often lead to the attacker running malicious code. There are also buffer underflows and buffer overruns caused by array indexing mistakes, but they are less common. Take a look at the following source code example:

void DoSomething(char *cBuffSrc, DWORD cbBuffSrc) {
    char cBuffDest[32];
    // Copies cbBuffSrc bytes with no check against the 32-byte destination.
    memcpy(cBuffDest, cBuffSrc, cbBuffSrc);
}

If the data comes from an untrusted source and has not been validated, then the attacker (the untrusted source) can easily provide data larger than cBuffDest and set cbBuffSrc to a value larger than the size of cBuffDest. When memcpy copies the data into cBuffDest, the return address from DoSomething is clobbered because cBuffDest is adjacent to the return address on the function's stack frame, and the attacker can make the code perform malicious operations.

The way to fix this is to distrust user input and not to believe any data held in cBuffSrc and cbBuffSrc:

void DoSomething(char *cBuffSrc, DWORD cbBuffSrc) {
    const DWORD cbBuffDest = 32;
    char cBuffDest[cbBuffDest];

#ifdef _DEBUG
    // In debug builds, fill the destination with a marker value so that code
    // relying on bytes that were never copied is easier to spot during testing.
    memset(cBuffDest, 0x33, cbBuffDest);
#endif

    // Never copy more bytes than the destination can hold.
    memcpy(cBuffDest, cBuffSrc, min(cbBuffDest, cbBuffSrc));
}

 

3. Prevent Cross-site Scripting

Cross-site scripting vulnerabilities are Web-specific issues and can compromise a client's data through a flaw in a single Web page. Imagine the following ASP.NET code fragment:

<script language="c#" runat="server">
    void Page_Load(object sender, EventArgs e) {
        Response.Write("Hello, " + Request.QueryString["name"]);
    }
</script>

How many of you have seen code like this? You may be surprised to learn it's buggy! Normally, a user would access this code using a URL that looks like this:

https://explorationair.com/welcome.aspx?name=Michael

The C# code assumes that the data is always well formed and contains nothing more than a name. Attackers, however, can abuse this code by providing script and HTML as the name. If you typed the following URL

https://northwindtraders.com/welcome.aspx?name=<script>alert('hi!');</script>

you'd get a Web page that displays a dialog box, saying "hi!" "So what?" you say. Imagine that the attacker convinces a user to click on a link like this, but the querystring contains some really nasty script and HTML to get your cookie and post it to a site that the attacker owns; the attacker now has your private cookie information or worse.

Here is a way of avoiding this:

// strName holds the value taken from the name query string parameter.
Regex r = new Regex(@"^[\w]{1,40}$");

if (r.Match(strName).Success) {
    // Cool! The string is OK.
} else {
    // Not cool! Invalid string.
}

 

This code uses a regular expression to verify that a string contains only 1 to 40 word characters (letters, digits, or underscores) and nothing else. Checking input against a pattern of what is allowed, rather than trying to filter out what looks dangerous, is the safe way to determine whether a value is correct.

The second defense is to HTML-encode all input when it is used as output. This converts dangerous characters such as < and > into their harmless escaped equivalents. You can escape any strings that might be a problem with HttpServerUtility.HtmlEncode in ASP.NET, or with Server.HTMLEncode in classic ASP.
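
Here is a minimal sketch of the welcome page shown earlier with this defense applied; the encoding alone is enough to neutralize any script embedded in the name parameter:

<script language="c#" runat="server">
    void Page_Load(object sender, EventArgs e) {
        // Server.HtmlEncode turns characters such as < and > into &lt; and &gt;,
        // so script smuggled into the name parameter is displayed as text
        // rather than executed by the browser.
        Response.Write("Hello, " + Server.HtmlEncode(Request.QueryString["name"]));
    }
</script>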

4. Don't Require sa Permissions

The last kind of input trust attack we want to discuss is SQL injection. Many developers write code that takes input and uses that input to build SQL queries to communicate with a back-end data store, such as Microsoft® SQL Server™ or Oracle.

Take a look at the following code snippet:

void DoQuery(string Id) {
    SqlConnection sql = new SqlConnection(
        @"data source=localhost;user id=sa;password=password;");
    sql.Open();
    string sqlstring = "SELECT hasshipped" +
        " FROM shipping WHERE id='" + Id + "'";
    SqlCommand cmd = new SqlCommand(sqlstring, sql);

 

This code is seriously flawed for three reasons:

- The connection to SQL Server is made as the system administrator account, sa.

- The sa account's password is "password", which is trivially guessable.

- The SQL statement is built by string concatenation. If a user enters an ID of 1001, you get the following SQL statement, which is perfectly valid and well formed:

SELECT hasshipped FROM shipping WHERE id = '1001'

However, attackers are more creative than this. They would enter an ID of "1001' DROP table shipping --", which would execute the following query:

SELECT hasshipped FROM shipping WHERE id = '1001' DROP table shipping -- ';
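
The fix is twofold: connect with a low-privilege account instead of sa, and pass the ID as a parameter rather than concatenating it into the SQL text. Here is a minimal sketch; it assumes the application has its own low-privilege database login and uses integrated Windows authentication so no password lives in the code:

using System.Data;
using System.Data.SqlClient;

void DoQuery(string Id) {
    // Connect as a low-privilege application account, never as sa.
    using (SqlConnection sql = new SqlConnection(
               @"data source=localhost;Integrated Security=SSPI;")) {
        sql.Open();
        // The ID travels as a typed parameter, not as part of the SQL text, so
        // "1001' DROP table shipping --" is just an ID that matches nothing.
        SqlCommand cmd = new SqlCommand(
            "SELECT hasshipped FROM shipping WHERE id = @id", sql);
        cmd.Parameters.Add("@id", SqlDbType.VarChar, 40).Value = Id;
        object hasshipped = cmd.ExecuteScalar();
    }
}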

5. Watch that Crypto Code!

The most common mistake is homegrown encryption code, which is typically quite fragile and easy to break. Never create your own encryption code; you won't get it right. Don't think that just because you've created your own cryptographic algorithm people won't figure it out. Attackers have access to debuggers, and they have both the time and the knowledge to determine exactly how these systems work, often breaking them in a matter of hours. Instead, use CryptoAPI for Win32® applications; for managed code, the System.Security.Cryptography namespace has a wealth of well-written and well-tested cryptographic algorithms.
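
For example, here is a minimal sketch that encrypts a buffer with the framework's AES implementation from System.Security.Cryptography. Key management (where the key comes from and how it is protected) is deliberately left out of scope:

using System.IO;
using System.Security.Cryptography;

static byte[] Encrypt(byte[] plaintext, byte[] key, out byte[] iv) {
    // Use the platform's tested AES implementation instead of a homegrown cipher.
    using (Aes aes = Aes.Create()) {
        aes.Key = key;        // 16, 24, or 32 bytes, obtained from a key store
        aes.GenerateIV();     // fresh random IV for every message
        iv = aes.IV;
        using (MemoryStream ms = new MemoryStream())
        using (CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(),
                                                  CryptoStreamMode.Write)) {
            cs.Write(plaintext, 0, plaintext.Length);
            cs.FlushFinalBlock();
            return ms.ToArray();
        }
    }
}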

 

6. Reduce Your Attack Profile

If a feature is not required by 90 percent of clients, then it should not be installed by default. When services you don't use are left running, you stop paying attention to them and they can be exploited. If a feature is installed by default, it should operate under the principle of least privilege: do not require the application to run with administrative rights if they are not needed.

7. Employ the Principle of Least Privilege

The operating system and the common language runtime (CLR) have a security policy for several reasons. The security policy is there to put walls around code so that intentional or (just as frequently) unintentional actions by users don't wreak havoc on the network. For instance, an attachment downloaded via e-mail and executed on Alice's machine is restricted to accessing only the resources that Alice can access. If the attachment contains a Trojan horse, a good security policy will limit the damage it can do.

When you design, build, and deploy server applications, you cannot assume that every request will come from a legitimate user. If a bad guy manages to send you a malformed request that causes your code to behave badly, you want every possible wall around your application to limit the damage. The principle of least privilege says that any given privilege should be granted to the least amount of code necessary, for the least amount of time necessary. In other words, at any given time, try to erect as many walls around your code as possible.
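
In the CLR, one concrete way to erect such a wall is code access security. The following sketch is illustrative only (the directory path and method are invented for the example): a declarative PermitOnly shrinks this method's effective permissions to read access under a single directory, no matter what the rest of the process is allowed to do:

using System.IO;
using System.Security.Permissions;

class TemplateStore {
    // Any file access outside the permitted directory causes the CLR to throw
    // a SecurityException, even if an attacker tricks the code into building
    // an unexpected path.
    [FileIOPermission(SecurityAction.PermitOnly, Read = @"C:\inetpub\appdata\")]
    public static string LoadTemplate(string fileName) {
        return File.ReadAllText(Path.Combine(@"C:\inetpub\appdata\", fileName));
    }
}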

8. Pay Attention to Failure Modes

There are so many ways a piece of code can fail that it's depressing just thinking about it, so most developers concentrate on the normal path of execution and give the failure paths little thought. Sadly, that is not a safe frame of mind. We need to pay much closer attention to failure modes in code. These bits of code are often written with little attention to detail and often go completely untested, and untested code often leads to security vulnerabilities. There are three things you can do to help alleviate this problem:

- Pay just as much attention to those little error handlers as you do to your normal code. Think about the state of the system when your error-handling code is executing: are you leaving the system in a valid and secure state?

- Once you write a function, step through it in your debugger several times, ensuring that you hit every error handler. Note that even this technique may not uncover subtle timing errors. You may need to pass bad arguments to your function or adjust the state of the system in some way that causes your error handlers to execute.

- Make sure your test suites force your functions to fail. Try to have test suites that exercise every line of code in your function. These can help you discover regressions, especially if you automate your tests and run them after every build.

Be sure that if your code fails, it leaves the system in the most secure state possible.
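
Here is a minimal sketch of that idea, with IsAuthorized standing in as a hypothetical placeholder for whatever check your application really performs. The code fails closed: access stays denied unless the check positively succeeds, and an exception in the failure path cannot widen it:

static bool TryAccess(string user, string resource) {
    bool allowAccess = false;          // start in the most secure state: denied
    try {
        allowAccess = IsAuthorized(user, resource);   // hypothetical check
    }
    catch (Exception) {
        // The check itself failed. Log the failure for investigation, but
        // leave the system locked down rather than guessing.
        allowAccess = false;
    }
    return allowAccess;
}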

 

9. Impersonation is Fragile

Impersonation allows each thread in a process to run in a distinct security context, typically the client's security context. This is all well and good for simple gateways like the file system redirector. However, impersonation is often used in other, more complex applications. Take a Web application, for instance. If you're writing a classic unmanaged ASP application, an ISAPI extension, or an ASP.NET application that specifies

<identity impersonate='true'>

in its Web.config file, you are running in an environment with two different security contexts: you have a process token and a thread token, and generally speaking, the thread token will be used for access checks. Say you are writing an ISAPI application that runs inside the Web server process. Your thread token is likely IUSR_MACHINE, given that most requests are unauthenticated. But your process token is SYSTEM! Say your code is compromised by a bad guy via a buffer overflow exploit. Do you think the bad guy will be content with running as IUSR_MACHINE? No way. It's very likely that his attack code will call RevertToSelf to remove the impersonation token, hoping to elevate his privilege level. In this case, he'll succeed quite nicely. Another thing he can do is call CreateProcess. The token for that new process will be copied not from the impersonation token, but from the process token, so the new process runs as SYSTEM.
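
To see how thin that protection is, here is a minimal .NET Framework sketch; it assumes classic Windows impersonation via System.Security.Principal rather than the unmanaged ISAPI case described above. Run inside an impersonating request, the two lines report different identities:

using System;
using System.Security.Principal;

static void ShowTokens() {
    // Under <identity impersonate='true'/>, this reports the client's identity,
    // for example IUSR_MACHINE on an anonymous request.
    Console.WriteLine("Thread token:  " + WindowsIdentity.GetCurrent().Name);

    // Impersonate(IntPtr.Zero) reverts to the process identity. It is the managed
    // counterpart of RevertToSelf, and exactly the call an attacker's injected
    // code would make to shed the impersonation token.
    using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(IntPtr.Zero)) {
        Console.WriteLine("Process token: " + WindowsIdentity.GetCurrent().Name);
    }
}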

 

10. Write Apps that Non-admins Can Actually Use

This really is a corollary of the principle of least privilege. Quit running as an administrator yourself. Check out Keith's Web site at https://www.develop.com/kbrown for more information on how developers can easily run as non-admins.

 

 

Security Vulnerability

A security vulnerability is a weakness in a product that makes it impossible to prevent an attacker's malicious activities, even when the product is used correctly. Here are some possible malicious activities:

  • Obtaining permissions on a computer that are higher than those of the user.
  • Taking over the operation of a user's computer.
  • Compromising data on a user's computer.

Note:
Never assume that your application will be run only in specific environments. Your application might be used in settings that you did not expect, especially when the application becomes popular. Assume instead that your code will run in hostile environments, and design, write, and test your code accordingly.

A secure product helps protect the following:

  • Confidentiality, integrity, and availability of a customer's information.
  • Integrity and availability of the processing resources under the control of the system owner or administrator.
