Security Question List: Managed Code (.NET Framework 2.0)
Retired Content
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
patterns & practices Developer Center
J.D. Meier, Alex Mackman, Blaine Wastell, Prashant Bansode, Jason Taylor, Rudolph Araujo
Microsoft Corporation
October 2005
Applies To
- Managed Code (.NET Framework 2.0)
Summary
Use the questions in this module to help you perform security code reviews on managed code (.NET Framework 2.0) applications. Use this question list in conjunction with the module, "How To: Perform a Security Code Review for Managed Code (.NET Framework 2.0)."
Contents
How to Use This Module
What's New in 2.0
SQL Injection
Cross-Site Scripting
Input/Data Validation
Code Access Security
Exception Management
Impersonation
Sensitive Data
Cryptography
Unsafe Code
Potentially Dangerous Unmanaged APIs
Auditing and Logging
Multi-threading
Additional Resources
How to Use This Module
Use this module to conduct an effective code review for security. Each question category includes a table that matches vulnerabilities to implications, and a set of questions that you can use to determine if your application is susceptible to the listed vulnerabilities. A reference that matches vulnerabilities to questions can be found in the "Vulnerability/Question Matrix" section.
When you use this module, keep the following in mind:
- How to perform security code review for managed code (.NET Framework 2.0) questions. Use the companion module, "How To: Perform a Security Code Review for Managed Code (.NET Framework 2.0)," to help you understand the code review process.
- How to use the question list. Use this question list as a starting point for "Step 3. Review Code for Security Issues," in "How To: Perform a Security Code Review for Managed Code (.NET Framework 2.0)."
- Prioritize questions for review. You may not need to answer all of the questions because some may not be relevant to your application.
What's New in 2.0
This section describes the most important changes in .NET Framework 2.0 that you should be aware of when you perform a security code review. The main changes include:
- Security Exception. The SecurityException object has been enhanced to provide more information when a permission demand fails.
- DPAPI managed wrapper. .NET Framework 2.0 provides a set of managed classes to access the Win32 Data Protection API (DPAPI). This makes it easier to secure sensitive data in memory when you write managed code. You no longer need to use P/Invoke. Code requires the new DataProtectionPermission to be able to use DPAPI.
- XML Encryption. The EncryptedXml class can be used to secure sensitive data, such as database connection strings, that must be stored on disk.
- SecureString. This new type uses DPAPI to keep string data encrypted while it is in memory, so that secrets are not exposed to memory or pagefile sniffing attacks.
Use the following questions to make sure that the code uses the new .NET Framework 2.0 features properly:
- Does the code take advantage of the improvements to SecurityException?
- Does the code use DPAPI to protect sensitive data in memory?
- Does the code use EncryptedXml to store sensitive data on disk?
- Does the code ensure that SecureStrings are not passed unnecessarily as regular strings?
Does the code take advantage of the improvements to SecurityException?
If your application is running in a partial trust environment, use the SecurityException object to gracefully handle permission request failures. Table 1 shows the properties on this object that make debugging security issues easier.
Table 1. SecurityException Object Properties
Name | Type | Description |
---|---|---|
Action | SecurityAction | The SecurityAction that failed the security check. |
Demanded | Object | The permission, permission set, or permission sets that were demanded and triggered the exception. |
DenySetInstance | Object | If a Deny stack frame caused the security check to fail, this property contains the denied permission set; otherwise it is null. |
FailedAssemblyInfo | AssemblyName | AssemblyName of the assembly that caused the security check to fail. |
FirstPermissionThatFailed | IPermission | The first permission in the failing PermissionSet (or PermissionSetCollection) that did not pass the security check. |
Method | MethodInfo | The method that the failed assembly was in when it encountered the security check that triggered the exception. If a PermitOnly or Deny stack frame failed, this will contain the method that put the PermitOnly or Deny frame on the stack. |
PermitOnlySetInstance | Object | If the stack frame that caused the security exception had a PermitOnly permission set, this property will contain it, otherwise it will be null. |
Url | String | URL of the assembly that failed the security check. |
Zone | SecurityZone | Zone of the assembly that failed the security check. |
Does the code use DPAPI to protect sensitive data in memory?
If your application is manipulating sensitive data, it should use DPAPI to store the data in encrypted form. Encrypting sensitive data until it is used reduces the chances that it will be stolen out of memory. DPAPI is now accessible through the System.Security.Cryptography.ProtectedMemory class.
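A minimal sketch of this approach is shown below. It assumes Windows and full trust; the secret value is illustrative. Note that ProtectedMemory requires the buffer length to be a multiple of 16 bytes.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class ProtectedMemoryExample
{
    static void Main()
    {
        // 8 Unicode characters = 16 bytes, satisfying the 16-byte block requirement.
        byte[] secret = Encoding.Unicode.GetBytes("p@Ssw0rd");

        // Encrypt the buffer in place; only this process can decrypt it.
        ProtectedMemory.Protect(secret, MemoryProtectionScope.SameProcess);

        // Decrypt only at the point of use.
        ProtectedMemory.Unprotect(secret, MemoryProtectionScope.SameProcess);
        Console.WriteLine(Encoding.Unicode.GetString(secret));
    }
}
```

MemoryProtectionScope.SameLogon and CrossProcess scopes are also available when the data must be shared beyond the current process.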
Does the code use EncryptedXml to store sensitive data on disk?
If your application stores sensitive information, such as database connection strings, on disk in XML format, it should use the System.Security.Cryptography.Xml.EncryptedXml class to protect the information. Encrypting sensitive information when it is stored on disk reduces the chances that it will be stolen.
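The following is a minimal sketch of encrypting one XML element with EncryptedXml. The element name and connection string are illustrative, and key management is out of scope here; in practice the symmetric key itself must be protected, for example with DPAPI.

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.Xml;
using System.Xml;

class EncryptedXmlExample
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<configuration><connectionString>Server=db;Database=app;</connectionString></configuration>");
        XmlElement element = (XmlElement)doc.GetElementsByTagName("connectionString")[0];

        using (RijndaelManaged key = new RijndaelManaged())   // 256-bit key by default
        {
            // Encrypt the element and wrap the ciphertext in an <EncryptedData> element.
            EncryptedXml exml = new EncryptedXml();
            byte[] cipher = exml.EncryptData(element, key, false);

            EncryptedData edata = new EncryptedData();
            edata.Type = EncryptedXml.XmlEncElementUrl;
            edata.EncryptionMethod = new EncryptionMethod(EncryptedXml.XmlEncAes256Url);
            edata.CipherData = new CipherData(cipher);

            // Replace the plain-text element with the encrypted one.
            EncryptedXml.ReplaceElement(element, edata, false);
        }
        Console.WriteLine(doc.OuterXml);  // the connection string no longer appears in clear text
    }
}
```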
Does the code ensure that SecureStrings are not passed unnecessarily as regular strings?
SecureString protects sensitive data only if the data is never stored as a string. Avoid storing sensitive data as a string wherever possible, because a string is immutable (read-only). Each time you change the value of a string, the original value is kept in memory along with the new value until it is garbage collected. If sensitive data is stored in memory as a string and then encrypted, the original string in memory could still be stolen. The best way to make sure that a secret in memory cannot be stolen is to store it only in a SecureString.
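A sketch of the intended pattern follows: the secret is built one character at a time so that no ordinary string ever holds it, and the unmanaged copy created for consumption is zeroed immediately after use. The characters here are illustrative.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

class SecureStringExample
{
    static void Main()
    {
        using (SecureString password = new SecureString())
        {
            // Append characters individually; never call new SecureString from a string.
            foreach (char c in new char[] { 'p', '@', 'S', 's' })
            {
                password.AppendChar(c);
            }
            password.MakeReadOnly();
            Console.WriteLine(password.Length);

            // Decrypt only at the point of use, then zero and free the unmanaged copy.
            IntPtr ptr = Marshal.SecureStringToGlobalAllocUnicode(password);
            try
            {
                // ... pass ptr to the API that needs the secret ...
            }
            finally
            {
                Marshal.ZeroFreeGlobalAllocUnicode(ptr);
            }
        }
    }
}
```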
SQL Injection
Your code is vulnerable to SQL injection attacks wherever it uses input parameters to construct SQL statements. A SQL injection attack occurs when untrusted input can modify the logic of a SQL query in unexpected ways. As you review the code, make sure that any input that is used in a SQL query is validated or that the SQL queries are parameterized. Table 2 summarizes the SQL injection vulnerability and its implications.
Table 2: SQL Injection Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Non-validated input used to generate SQL queries | SQL injections can result in unauthorized access, modification, or destruction of SQL data. |
The following questions can help you to identify vulnerable areas:
- Is the application susceptible to SQL injection?
- Does the code use parameterized stored procedures?
- Does the code attempt to filter input?
Is the application susceptible to SQL injection?
Pay close attention to your data access code. Scan for the strings "SqlCommand", "OleDbCommand", or "OdbcCommand" to help identify data access code. Identify any input field that you use to form a SQL database query. Check that these fields are suitably validated for type, format, length, and range.
Does the code use parameterized stored procedures?
Stored procedures alone cannot prevent SQL injection attacks. Check that your code uses parameterized stored procedures. Check that your code uses typed parameter objects such as SqlParameter, OleDbParameter, or OdbcParameter. The following example shows the use of a SqlParameter.
SqlDataAdapter myCommand = new SqlDataAdapter("spLogin", conn);
myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
SqlParameter parm = myCommand.SelectCommand.Parameters.Add(
    "@userName", SqlDbType.VarChar, 12);
parm.Value = txtUid.Text;
The typed SQL parameter checks the type and length of the input, and ensures that the userName input value is treated as a literal value and not as executable code in the database.
Does the code use parameters in SQL statements?
If the code does not use stored procedures, make sure that it uses parameters in the SQL statements it constructs, as shown in the following example.
select status from Users where UserName=@userName
Check that the code does not use the following approach, where the input is used directly to construct the executable SQL statement by using string concatenation.
string sql = "select status from Users where UserName='"
+ txtUserName.Text + "'";
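A self-contained sketch of the parameterized alternative is shown below. The table name and column are taken from the query above; the method structure and the sample injection string are illustrative.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ParameterizedQueryExample
{
    // The SQL text is fixed; user input travels only as a typed parameter value.
    static SqlCommand BuildCommand(SqlConnection conn, string userName)
    {
        SqlCommand cmd = new SqlCommand(
            "select status from Users where UserName=@userName", conn);
        cmd.CommandType = CommandType.Text;
        SqlParameter parm = cmd.Parameters.Add("@userName", SqlDbType.VarChar, 12);
        parm.Value = userName;
        return cmd;
    }

    static void Main()
    {
        // Even a deliberate injection attempt stays inside the parameter value;
        // the command text itself never changes.
        SqlCommand cmd = BuildCommand(new SqlConnection(), "bob' or 1=1 --");
        Console.WriteLine(cmd.CommandText);
    }
}
```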
Does the code attempt to filter input?
A common approach is to develop filter routines to add escape characters to characters that have special meaning to SQL. This is an unsafe approach, and developers should not rely on it because of character representation issues.
Cross-Site Scripting
Code is vulnerable to cross-site scripting attacks wherever it uses input parameters in the output HTML stream returned to the client. Even before you conduct a code review, you can run a simple test to determine if your application is vulnerable. Search for pages where user input information is sent back to the browser.
To perform this test, type text, such as XYZ, in form fields and test the output. If the browser displays XYZ, or if you see XYZ when you view the HTML source, then your Web application is vulnerable to cross-site scripting. If you want to perform a more dynamic test, inject <script>alert('hello');</script>. This technique might not work in all cases because it depends on how the input is used to generate the output.
Table 3 summarizes cross-site scripting vulnerability and its implications.
Table 3: Cross-Site Scripting Vulnerability and Implications
Vulnerability | Implications |
---|---|
Unvalidated and untrusted input in the HTML output stream | Cross-site scripting can allow an attacker to execute a malicious script or steal a user's session and/or cookies. |
The following questions can help you to identify vulnerable areas:
- Does the code echo user input or URL parameters back to a Web page?
- Does the code persist user input or URL parameters to a data store that could later be displayed on a Web page?
Does the code echo user input or URL parameters back to a Web page?
If you include user input or URL parameters in the HTML output stream, you might be vulnerable to cross-site scripting. Make sure that the code validates input and uses HtmlEncode or UrlEncode to encode output. Even if a malicious user cannot use your application's UI to access URL parameters, the attacker still may be able to tamper with them.
Reflective cross-site scripting is less dangerous than persistent cross-site scripting due to its transitory nature.
The application should not contain code similar to the following example.
Response.Write( Request.Form["name"] );
Instead, the application should contain code similar to the following.
Response.Write( HttpUtility.HtmlEncode( Request.Form["name"] ) );
Does the code persist user input or URL parameters to a data store that could later be displayed on a Web page?
If the code uses data binding or explicit database access to put user input or URL parameters in a persistent data store and then later includes this data in the HTML output stream, the application could be vulnerable to cross-site scripting. Check that the application validates input and uses HtmlEncode or UrlEncode to encode output. Pay particular attention to areas of the application that permit users to modify configuration or personalization settings. Also pay attention to persistent free-form user input, such as message boards, forums, discussions, and Web postings. Even if an attacker cannot use the application's UI to access URL parameters, a malicious user might still be able to tamper with them.
Persistent cross-site scripting is more dangerous than reflective cross-site scripting.
Input/Data Validation
If you make unfounded assumptions about the type, length, format, or range of input, your application is unlikely to be robust. Input validation can become a security issue if an attacker discovers that you have made unfounded assumptions. The attacker can then supply carefully crafted input that compromises your application. Table 4 shows a set of common input and/or data validation vulnerabilities and their implications.
Table 4: Input/Data Validation Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Unvalidated and untrusted input in the HTML output stream | The application is susceptible to cross-site scripting attacks. |
Unvalidated input used to generate SQL queries | The application is susceptible to SQL injection attacks. |
Reliance on client-side validation | Client validation is easily bypassed. |
Use of input file names, URLs, or user names for security decisions | The application is susceptible to canonicalization issues, which can lead to security flaws. |
Application-only filters for malicious input | This is almost impossible to do correctly because of the enormous range of potentially malicious input. The application should constrain, reject, and sanitize input. |
Use the following questions when you review the code's input and data validation:
- Does the code validate data from all sources?
- Does the code use a centralized approach to input and data validation?
- Does the code rely on client-side validation?
- Is the code susceptible to canonicalization attacks?
- Is the code susceptible to SQL injection?
- Is the code susceptible to cross-site scripting?
Does the code validate data from all sources?
Check that your code makes no assumptions about the validity of input data. It should assume all data is malicious. Web applications should validate data from all sources, including form fields, query strings, cookies, and HTTP headers.
Does the code use a centralized approach to input and data validation?
For common types of input fields, examine whether the code uses shared validation and filtering libraries, so that validation rules are applied consistently and there is a single point of maintenance.
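A minimal sketch of such a shared library is shown below. The class name and regular expression patterns are illustrative only; real rules should constrain each field's type, format, length, and range.

```csharp
using System;
using System.Text.RegularExpressions;

// One shared class gives every page the same rules and one place to fix them.
static class InputValidator
{
    static readonly Regex UserNameRegex = new Regex(@"^[a-zA-Z0-9_]{1,12}$");
    static readonly Regex ZipCodeRegex  = new Regex(@"^\d{5}(-\d{4})?$");

    public static bool IsValidUserName(string input)
    {
        return input != null && UserNameRegex.IsMatch(input);
    }

    public static bool IsValidZipCode(string input)
    {
        return input != null && ZipCodeRegex.IsMatch(input);
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(InputValidator.IsValidUserName("bob_smith"));     // True
        Console.WriteLine(InputValidator.IsValidUserName("x' or 1=1 --"));  // False
    }
}
```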
Does the code rely on client-side validation?
Client-side validation can reduce the number of round trips to the server, but do not rely on it for security because it is easy to bypass. Validate all input at the server.
It is easy to modify the behavior of the client or just write a new client that does not observe the same data validation rules. Consider the following example.
<html>
<head>
<script language='javascript'>
function validateAndSubmit(form)
{
  if(form.elements["path"].value.length > 0)
  {
    form.submit();
  }
}
</script>
</head>
<body>
<form action="Default.aspx" method="post">
<input type="text" id="path" name="path" />
<input type="button" onclick="validateAndSubmit(this.form)" value="Submit" />
</form>
</body>
</html>
In this example, client-side scripting validates that the length of the "path" is greater than zero. If the server processing of this value relies on this assumption to mitigate a security threat, then the attacker can easily break the system.
Is the code susceptible to canonicalization attacks?
Canonicalization errors occur whenever there are multiple ways to represent a resource, and the different representations result in different security logic being run. There are several resource types for which this problem can occur, including:
- File resources
- Use of partial paths might result in a file other than what you expect being loaded.
- Use of the PATH environment variable might give control of the paths that your application uses to an attacker.
- URLs
- Alternate representation of an IP address, such as dotless IP, might result in a URL other than what you expected being loaded.
- Encoded characters, such as %20 for space, might result in a URL other than what you expected being loaded.
The result of this issue is that an attacker gains access to a resource that they would not otherwise be able to access. As you review the code, look carefully at areas where resources are accessed based upon user input. Make sure that file names are canonicalized with Path.GetFullPath before they are used, and that URLs are canonicalized with Uri.AbsoluteUri before they are used.
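A sketch of the file-name check follows. It assumes a Windows file system, and the allowed directory is illustrative; the key point is to canonicalize with Path.GetFullPath first and only then compare against the permitted root.

```csharp
using System;
using System.IO;

class CanonicalizationExample
{
    // Canonicalize the user-supplied file name, then verify the result is
    // still inside the permitted directory.
    static bool IsInAllowedDirectory(string userFileName)
    {
        string allowedDir = Path.GetFullPath(@"c:\inetpub\appdata");
        string fullPath = Path.GetFullPath(Path.Combine(allowedDir, userFileName));
        return fullPath.StartsWith(allowedDir + Path.DirectorySeparatorChar,
                                   StringComparison.OrdinalIgnoreCase);
    }

    static void Main()
    {
        Console.WriteLine(IsInAllowedDirectory("report.txt"));                // True
        Console.WriteLine(IsInAllowedDirectory(@"..\..\windows\win.ini"));    // False on Windows
    }
}
```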
Consider using code access security for an extra layer of protection. Refuse permissions that are not needed, and indicate to the runtime which permissions your code needs, as shown in the following example.
[assembly:FileIOPermission( SecurityAction.RequestMinimum, Read = "c:\\temp" )]
[assembly:FileDialogPermission( SecurityAction.RequestOptional )]
[assembly:FileIOPermission( SecurityAction.RequestRefuse, Write = "c:\\windows" )]
Is the code susceptible to SQL injection?
Your code is vulnerable to SQL injection attacks if it uses input parameters to construct SQL statements. A SQL injection attack occurs when untrusted input can modify the semantics of a SQL query in unexpected ways. As you are reviewing the code, make sure that the SQL queries are parameterized and that any input used in a SQL query is validated.
Consider the following SQL query code example.
query = "SELECT * FROM USERS WHERE USER_ID = '" + userIdFromWebPage + "'";
userIdFromWebPage is a variable that contains untrusted data that has not been validated. Imagine that it contains one of the following:
- "' or 1=1 --"
- "' ;DROP TABLE users --"
- "' ;exec xp_cmdshell('format c:') --"
The final query could look like this.
"SELECT * FROM USERS WHERE USER_ID = '' ;exec xp_cmdshell('format c:') --"
This would result in formatting the C: drive of the database server.
The code should use strongly typed parameters and look like this:
SqlCommand queryCMD = new SqlCommand("GetUser", sqlConn);
queryCMD.CommandType = CommandType.StoredProcedure;
SqlParameter myParm = queryCMD.Parameters.Add("@UserID", SqlDbType.Int, 4);
myParm.Value = userIdFromWebPage;
SqlDataReader myReader = queryCMD.ExecuteReader();
Is the code susceptible to cross-site scripting?
Cross-site scripting occurs when an attacker manages to input script code into an application so that it is echoed back and executed in the security context of the application. This allows an attacker to steal user information, including forms data and cookies. This vulnerability could be present whenever a Web application echoes unfiltered user input back out to Web content.
As you review the code, make sure that untrusted data whose ultimate output is Web page content does not contain HTML tags. The data could move from untrusted input to Web page output by a roundabout path; for example, it might be put in a database, retrieved from the database later, and then displayed on a Web page. To protect against this security issue, make sure that HtmlEncode or UrlEncode is used before user input is echoed back to Web content.
Code Access Security
Code access security is a resource constraint model designed to restrict the types of system resources that the code can access and the types of privileged operations that the code can perform. These restrictions are independent of the user who calls the code or the user account under which the code runs. If the code you are reviewing operates in partially trusted environments and uses explicit code access security techniques, review it carefully to make sure that code access security is used appropriately. Table 5 shows possible vulnerabilities that occur with improper use of code access security.
Table 5: Code Access Security Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Improper use of link demands or asserts | The code is susceptible to luring attacks. |
Code allows untrusted callers | Malicious code can use the code to perform sensitive operations and access resources. |
If the code uses explicit code access security techniques, review it for the following:
- Does the code use link demands or assert calls?
- Does the code use AllowPartiallyTrustedCallersAttribute?
- Does the code use potentially dangerous permissions?
- Does the code give dependencies too much trust?
Does the code use link demands or assert calls?
Look closely at each LinkDemand and Assert call. These can open the code to luring attacks because the code access stack walk will be stopped before it is complete. While their use is sometimes necessary for performance reasons, make sure that there can be no untrusted callers higher in the stack that could use this method's LinkDemand or Assert call as a mechanism for attack.
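Where an Assert is genuinely needed, the safe pattern is to demand a related permission first, keep the asserted scope as small as possible, and revert in a finally block. The following is a sketch under those assumptions; the file path and method name are illustrative.

```csharp
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

class AssertExample
{
    // Demand before Assert so untrusted callers cannot ride the stopped stack walk;
    // revert the Assert as soon as the privileged operation completes.
    static string ReadConfigFile(string path)
    {
        new FileIOPermission(FileIOPermissionAccess.Read, path).Demand();
        new FileIOPermission(FileIOPermissionAccess.Read, path).Assert();
        try
        {
            return File.ReadAllText(path);
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }

    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "config.xml");
        File.WriteAllText(path, "<configuration/>");
        Console.WriteLine(ReadConfigFile(path));
        File.Delete(path);
    }
}
```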
Does the code use AllowPartiallyTrustedCallersAttribute?
Pay particular attention if the code allows partially trusted callers by including the following attribute.
[assembly: AllowPartiallyTrustedCallersAttribute()]
This allows the assembly to be called by code that is not fully trusted. If the code you are reviewing then calls into an assembly that does not allow partially trusted callers, a security issue could result.
Does the code use potentially dangerous permissions?
Review the code for requests for potentially dangerous permissions, such as the following: UnmanagedCode, MemberAccess, SerializationFormatter, SkipVerification, ControlEvidence / ControlPolicy, ControlAppDomain, ControlDomainPolicy, and SuppressUnmanagedCodeSecurityAttribute.
The following code example uses SuppressUnmanagedCodeSecurityAttribute. You should flag this as a potentially dangerous practice.
[DllImport("Crypt32.dll", SetLastError=true, CharSet=System.Runtime.InteropServices.CharSet.Auto)]
[SuppressUnmanagedCodeSecurity]
private static extern bool CryptProtectData(
ref DATA_BLOB pDataIn,
String szDataDescr,
ref DATA_BLOB pOptionalEntropy,
IntPtr pvReserved,
ref CRYPTPROTECT_PROMPTSTRUCT pPromptStruct,
int dwFlags,
ref DATA_BLOB pDataOut);
Does the code give dependencies too much trust?
Without explicit safeguards, an attacker can trick the code into loading a malicious library instead of trusted code. Check that all of the loaded assemblies are strongly named; a strong name ensures that a tampered assembly will fail to load. Without strong names, your code could unknowingly call malicious code. Strong naming is not available for native code libraries, so code should not trust native libraries implicitly; verify them with a hash or a certificate instead. Additionally, make sure that all libraries are loaded with a complete path to avoid canonicalization attacks.
Also check whether delay signing is enabled. Delay signing is generally regarded as a good practice because it helps protect the confidentiality of the private key that will be used for signing the component. The following code shows how delay signing is implemented.
[assembly:AssemblyDelaySignAttribute(true)]
Exception Management
Secure exception handling can help prevent certain application-level denial of service attacks and can also prevent system-level information useful to attackers from being returned to the client. For example, without proper exception handling, information such as database schema details, operating system versions, stack traces, file names and path information, SQL query strings, and other information of value to an attacker can be returned to the client. Table 6 shows possible vulnerabilities that can result from poor or missing exception management and the implications of these vulnerabilities.
Table 6: Exception Management Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Failing to use structured exception handling | The application is more susceptible to denial of service attacks and logic flaws, which can expose security vulnerabilities. |
Revealing too much information to the client | An attacker can use this information to help plan and tune subsequent attacks. |
The following questions help you to identify vulnerable areas:
- Is there proper and consistent error checking?
- Do error messages give away too much information?
- Does the application prevent sensitive exception details from being returned to the client?
- Does the application handle errors and exception conditions in the code?
Is there proper and consistent error checking?
Make sure that the application uses try/catch blocks and return value checking consistently. Look for empty catch blocks. Review error handling wherever an assembly is loaded dynamically; look for calls to System.Reflection.Assembly.Load. If a library that contains security functionality fails to load, make sure that the code fails securely by defaulting to higher security.
Look for places where impersonation or elevated privileges are not lowered when an exception is thrown. This can occur because of a logic issue—the catch block doesn't contain the right code—or because of a subtle misuse of a finally block by an attacker.
Exception filters run before the finally block; therefore, they could result in malicious code executing in the context of the privileged code, rather than in the partially trusted context it should be running in.
The following is an example of bad code logic.
try
{
ElevatePrivilege();
// If ReadSecretFile throws an exception privileges will not be lowered
ReadSecretFile();
LowerPrivilege();
}
catch(FileException fe)
{
ReportException();
}
Do error messages give away too much information?
Error messages should be helpful to the average user without giving away information that an attacker could use to attack the application. Make sure that the code does not give away call stacks, lines of code, server file paths, database names, or anything else internal to the application. This information is not helpful to a user, but can be very helpful to an attacker.
Make sure custom error pages have been implemented in ASP.NET applications to prevent sensitive data from being revealed and to make sure that application tracing has been turned off.
Review security-sensitive error paths carefully. For example, the code should not change its error messages across differing error code paths during user authentication. A common problem is to display one error message for an invalid user name and a different message for a valid user name with an invalid password. While the difference can be subtle, it gives attackers information that they can use to compromise the application.
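The intended pattern can be sketched as follows. The credential-store methods are hypothetical stand-ins; the point is that both failure causes produce the identical message.

```csharp
using System;

class LoginExample
{
    // Hypothetical stand-ins for a real credential store.
    static bool UserExists(string u) { return u == "alice"; }
    static bool PasswordMatches(string u, string p) { return u == "alice" && p == "secret"; }

    // Return the same message for every failure cause, so an attacker cannot
    // tell a bad user name from a bad password.
    static string Authenticate(string userName, string password)
    {
        if (!UserExists(userName) || !PasswordMatches(userName, password))
        {
            return "Invalid user name or password.";
        }
        return "OK";
    }

    static void Main()
    {
        Console.WriteLine(Authenticate("alice", "wrong"));   // valid user, bad password
        Console.WriteLine(Authenticate("mallory", "x"));     // unknown user
    }
}
```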
Does the application prevent sensitive exception details from being returned to the client?
Do not reveal too much information to the caller. Exception details can include operating system and .NET Framework version numbers, method names, computer names, SQL command statements, connection strings, and other details useful to an attacker. Log detailed error messages to the event log and return generic error messages to the end user. To do so, make sure that the customErrors mode attribute is set to "On", that custom error pages are configured for known errors, and that a default error page with generic information is configured for all unknown errors. A sample configuration file showing appropriate <customErrors> configuration is shown here.
<system.web>
...
<customErrors mode="On" defaultRedirect="DefaultErrorPage.htm">
<error statusCode="401" redirect="KnownError401.htm"/>
<error statusCode="402" redirect="KnownError402.htm"/>
</customErrors>
...
</system.web>
Also ensure that trace is disabled as follows.
<system.web>
...
<trace enabled="false"/>
...
</system.web>
Does the application handle errors and exception conditions in the code?
Make sure that error conditions are handled correctly and the application does not rely on exceptions instead of condition checking. Ensure that appropriate error conditions are checked, errors are logged, and user-friendly messages are displayed by the application. Here is a code example:
if(SomeConditionFailed)
{
    // Error condition; log the error
    log("Error condition");
    Response.Redirect("Error.htm");
}
//continue with normal execution
...
Make sure that when you call code that could raise exceptions, the calling code uses try/catch blocks as shown below to properly handle any raised exceptions. Also use finally blocks to ensure that any resources in use when the exception occurs are appropriately closed or released.
try
{
// Call code that might throw exceptions, for example registry, database, or file access
}
catch
{
// Log the exception
throw; //Propagate the exception to the caller if appropriate
}
finally
{
// Ensure resources are released
}
Impersonation
If the application uses impersonation, make sure that it is properly implemented. Table 7 lists the impersonation vulnerabilities and their security implications.
Table 7: Impersonation Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Revealing service account credentials to the client | An attacker could use these credentials to attack the server. |
Code executes with higher privileges than expected | An attacker can do more damage when code runs with higher privileges. |
The following questions can help you to identify vulnerable areas:
- Does the application use hard-coded impersonation credentials?
- Does the application clean up properly when it uses impersonation?
Does the application use hard-coded impersonation credentials?
If the code impersonates a service account, it should not pass hard-coded credentials to LogonUser. If the code needs multiple identities to access a range of downstream resources and services, it should use Microsoft Windows Server™ 2003 protocol transition and the WindowsIdentity constructor. This allows the code to create a Windows token given only an account's user principal name (UPN). To access a network resource, the code needs delegation; to use delegation, your server must be configured as trusted for delegation in the Microsoft Active Directory® directory service.
The following code shows how to construct a Windows token using the WindowsIdentity constructor.
using System;
using System.Security.Principal;
public void ConstructToken(string upn, out WindowsPrincipal p)
{
WindowsIdentity id = new WindowsIdentity(upn);
p = new WindowsPrincipal(id);
}
Does the application clean up properly when it uses impersonation?
If the code uses programmatic impersonation, check that it uses structured exception handling and that the impersonation code is inside try blocks. Be sure that catch blocks are used to handle exceptions and that finally blocks are used to ensure that the impersonation is reverted. By using a finally block, the code ensures that the impersonation token is removed from the current thread, whether an exception is generated or not.
The application should not contain code similar to the following:
try
{
ElevatePrivilege();
// if ReadSecretFile throws an exception privileges will not be lowered
ReadSecretFile();
LowerPrivilege();
}
catch(FileException fe)
{
ReportException();
}
Instead, it should contain code similar to the following:
try
{
ElevatePrivilege();
// If ReadSecretFile throws an exception privileges will not be lowered
ReadSecretFile();
}
catch(FileException fe)
{
ReportException();
}
finally
{
LowerPrivilege();
}
Sensitive Data
If the code you are reviewing uses sensitive data, such as connection strings and account credentials, you should make sure that the code protects the data and ensures that it remains private and unaltered. Table 8 shows a set of common sensitive data vulnerabilities and their implications.
Table 8: Sensitive Data Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Storing secrets when you do not need to | This drastically increases the security risk. Do not store secrets unnecessarily. |
Storing secrets in code | If the code is on the server, an attacker may be able to download it. Secrets are visible in binary assemblies. |
Storing secrets in clear text | Anyone who can log on to the server can see secret data. |
Passing sensitive data in clear text over networks | Eavesdroppers can monitor the network to reveal and tamper with the data. |
The following questions help you to identify vulnerable areas:
- Does the code store secrets?
- Is sensitive data stored in predictable locations?
Does the code store secrets?
If an assembly stores secrets, review the design to be sure that it is absolutely necessary to store the secret. If the code must store a secret, review the following questions to make sure that it does so as securely as possible:
Does the application store secrets in code?
Are there secrets or critical intellectual property embedded in the code? Managed code is easy to decompile; it is possible to recover code from the final executable that is very similar to the original source. Any sensitive intellectual property or hard-coded secrets can be stolen with ease. An obfuscator can make this type of theft more difficult, but cannot entirely prevent it. Another common mistake is placing data in hidden form fields in the belief that it will not be visible to the user.
The following is an example of bad code containing hard-coded account credentials:
IntPtr tokenHandle = new IntPtr(0);
IntPtr dupeTokenHandle = new IntPtr(0);
string userName = "joe", domainName = "acmecorp", password = "p@Ssw0rd";
const int LOGON32_PROVIDER_DEFAULT = 0;
// This parameter causes LogonUser to create a primary token.
const int LOGON32_LOGON_INTERACTIVE = 2;
const int SecurityImpersonation = 2;
tokenHandle = IntPtr.Zero;
dupeTokenHandle = IntPtr.Zero;
// Call LogonUser to obtain a handle to an access token.
bool returnValue = LogonUser(userName, domainName, password,
                             LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
                             ref tokenHandle);
How does the code encrypt secrets?
Verify that the code uses DPAPI to encrypt connection strings and credentials. Do not store secrets in the Local Security Authority (LSA) because the account used to access the LSA requires extended privileges. For information on using DPAPI, see "How To: Create a DPAPI Library" in the "How To" section of Microsoft patterns & practices Volume I, Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication or How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI.
Does the code store secrets in the registry?
If the code stores secrets in HKEY_LOCAL_MACHINE, verify that the secrets are first encrypted and then secured with a restricted ACL. An ACL is not required if the code stores secrets in HKEY_CURRENT_USER because this registry key is automatically restricted to processes running under the associated user account.
Does the code eliminate secrets from memory?
Look for failure to clear secrets from memory after use. Because the common language runtime (CLR) manages memory for you, this is actually harder to do in managed code than it is in native code. To make sure that secrets are adequately cleared, verify that the following steps have been taken:
- Strings should not be used to store secrets; they cannot be changed or effectively cleared. Instead, the code should use a byte array or a CLR 2.0 SecureString.
- Whatever type the code uses, it should call the Clear method as soon as it is finished with the data.
- If a secret is paged to disk, it can persist for long periods of time and can be difficult to completely clear. Make sure that GCHandle.Alloc and GCHandleType.Pinned are used to keep the managed objects from being paged to disk.
Is sensitive data stored in predictable locations?
Sensitive data should be stored and transmitted in encrypted form; anything less invites theft. For example, a common error is to store database server passwords in the ASP.NET Web.config file, as shown in the following example.
<!-- web.config -->
<connectionStrings>
<add name="MySQLServer"
connectionString="Initial Catalog=finance;Data Source=localhost;User ID=Bob;Password=pwd;"
providerName="System.Data.SqlClient"/>
</connectionStrings>
Instead, the connection strings should be encrypted with the Aspnet_regiis utility. The command syntax is:
aspnet_regiis -pe "connectionStrings" -app "/MachineDPAPI" -prov "DataProtectionConfigurationProvider"
The Web.config file after encryption should be similar to the following.
<!-- web.config after encrypting the connection strings section -->
<connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
<EncryptedData>
<CipherData>
<CipherValue>AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAexuIJ/8oFE+sGTs7jBKZdgQAAAACAAAAAAADZgAAqAAAABAAAAAKms84dyaCPAeaSC1dIMIBAAAAAASAAACgAAAAEAAAAKaVI6aAOFdqhdc6w1Er3HMwAAAAcZ00MZOz1dI7kYRvkMIn/
BmfrvoHNUwz6H9rcxJ6Ow41E3hwHLbh79IUWiiNp0VqFAAAAF2sXCdb3fcKkgnagkHkILqteTXh</CipherValue>
</CipherData>
</EncryptedData>
</connectionStrings>
...
Similarly, the code should not store forms authentication credentials in the Web.config file, as illustrated in the following example.
<authentication mode="Forms">
<forms name="App" loginUrl="/login.aspx">
<credentials passwordFormat="Clear">
<user name="UserName1" password="Password1"/>
<user name="UserName2" password="Password2"/>
<user name="UserName3" password="Password3"/>
</credentials>
</forms>
</authentication>
Instead, use an external store with well-controlled access such as Active Directory or a SQL Server database.
Cryptography
Review the code to see whether it uses cryptography to provide privacy, integrity (tamper detection), non-repudiation, or authentication. Table 9 shows the vulnerabilities that can be introduced if cryptography is used inappropriately.
Table 9: Cryptography Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Using custom cryptography | This is less secure than the tried and tested platform-provided cryptography. |
Using the wrong algorithm or too small a key size | Newer algorithms increase security. Larger key sizes increase security. |
Failing to secure encryption keys | Encrypted data is only as secure as the encryption key. |
Using the same key for a prolonged period of time | A static key is more likely to be discovered over time. |
The following questions help you to identify vulnerable areas:
- Does the code use custom cryptographic algorithms?
- Does the code use the correct algorithm and an adequate key size?
- How does the code manage and store encryption keys?
- Does the code generate random numbers for cryptographic purposes?
Does the code use custom cryptographic algorithms?
Look for custom cryptographic routines. Make sure that the code uses the System.Security.Cryptography namespace. Cryptography is notoriously difficult to implement correctly. The Windows crypto APIs are implementations of algorithms derived from years of academic research and study. Some believe that a less well-known algorithm is more secure, but this is not true: algorithms that have received more public review are generally more trustworthy. An obscure, untested algorithm will not protect a flawed implementation from a determined attacker.
Does the code use the correct algorithm and an adequate key size?
Review your code to see what algorithms and key sizes it uses. Review the following questions:
Does the code use symmetric encryption?
If so, check that it uses Rijndael (now referred to as Advanced Encryption Standard [AES]) or Triple Data Encryption Standard (3DES) when encrypted data needs to be persisted for long periods of time. Use the weaker (but quicker) RC2 and DES algorithms only to encrypt data that has a short lifespan, such as session data.
Does the code use the largest key sizes possible?
Use the largest key size possible for the algorithm you are using. Larger key sizes make attacks against the key much more difficult, but can degrade performance.
How does the code manage and store encryption keys?
Look for poor management of keys. Flag hard-coded key values: leaving these in the code all but guarantees that the cryptography can be broken. Make sure that key values are not passed from method to method by value, because doing so leaves multiple copies of the secret in memory for an attacker to discover.
Does the code generate random numbers for cryptographic purposes?
Look for poor random number generators. Make sure that the code uses System.Security.Cryptography.RNGCryptoServiceProvider to generate cryptographically secure random numbers. The Random class does not generate cryptographically strong random numbers; its output is repeatable and predictable.
Unsafe Code
Pay particularly close attention to any code compiled with the /unsafe switch, because it does not receive all of the protection that normal managed code is given. Look for potential buffer overflows, array out-of-bounds errors, integer underflow and overflow, and data truncation errors. Table 10 shows possible vulnerabilities that can be introduced in unsafe code.
Table 10: Unsafe Code Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Buffer overrun in unmanaged code or code marked /unsafe | Allows arbitrary code execution by using the privileges of the running application. |
Integer overflow in unmanaged code or code marked /unsafe | Unexpected calculation results in system instability or allows an attacker to read arbitrary memory. |
Format string problem in unmanaged code or code marked /unsafe | An attacker can read or modify arbitrary memory. |
Array out of bounds in unmanaged code or code marked /unsafe | Failure to check array bounds before access can allow an attacker to read arbitrary memory. |
Data truncation in unmanaged code or code marked /unsafe | Unexpected data truncation can result in system instability or allow an attacker to read arbitrary memory. |
Review unsafe code by using the following questions:
- Is the code susceptible to buffer overruns?
- Is the code susceptible to integer overflows?
- Is the code susceptible to format string problems?
- Is the code susceptible to array out of bound errors?
Is the code susceptible to buffer overruns?
Buffer overruns are a vulnerability that may lead to execution of arbitrary code. While tracing through unmanaged or unsafe code, make sure that the following rules are followed:
- Make sure that any function that copies variable-length data into a buffer takes, and correctly uses, a maximum-length parameter.
- Make sure that the code does not rely on another layer or tier for data truncation.
- If you see a problem, make sure the code truncates the data instead of expanding the buffer to fit it. Buffer expansion may just move the problem downstream.
- Make sure any unmanaged code was compiled with the /GS option.
The application should not contain code similar to the following example.
public void ProcessInput()
{
char[] data = new char[255];
GetData(data);
}
public unsafe void GetData(char[] buffer)
{
int ch = 0;
fixed (char* pBuf = buffer)
{
do
{
ch = System.Console.Read();
*(pBuf++) = (char)ch;
} while(ch != '\n');
}
}
In this code example, an overflow occurs whenever a single line is more than 255 characters long. There are two problems in this code:
- The ProcessInput function allocates only enough space for 255 characters.
- The GetData function does not check the size of the array as it fills it.
Is the code susceptible to integer overflows?
This problem occurs if a calculation causes a data value to be larger or smaller than its data type allows, which makes the value wrap and become much larger or smaller than expected. As you trace through unmanaged or unsafe code, make sure that no user-controlled input can feed a calculation that causes an underflow or overflow condition.
The application should not contain code similar to the following example.
int[] filter(uint len, int[] numbers)
{
uint newLen = len * 3/4;
int[] buf = new int[newLen];
int j = 0;
for(int i = 0; i < len; i++)
{
if (i % 4 != 0)
buf[j++] = numbers[i];
}
return buf;
}
The problem in this example is that, in calculating newLen, the code first computes len * 3 and then divides by 4. When len is large enough (about 1.4 billion), len * 3 overflows and newLen is assigned a value that is too small. The result is out-of-range array access in the buf array.
Is the code susceptible to format string problems?
Format string problems are caused by the way that the printf functions handle variables and by the %n format directive. While you review unmanaged or unsafe code, make sure that format string data never contains user input.
The application should not contain code similar to the following example.
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Whatever the user said, spit back! */
    printf(argv[1]);
    return 0;
}
In this example, untrusted input in the form of a command line parameter is passed directly to a printf statement. This means that an attacker could include format string % directives in the string, and force the application to return or modify arbitrary memory in the stack.
Is the code susceptible to array out of bound errors?
Array indexing errors, like buffer overruns, can lead to memory being overwritten at arbitrary locations. This in turn can lead to application instability or, with a carefully constructed attack, code injection. While you review unmanaged or unsafe code, make sure that the following rules are followed:
- With C/C++ code, make sure that indexes run from zero to n-1, where n is the number of array elements.
- Where possible, make sure that code does not use input parameters as array indices.
- Make sure that any input parameters used as array indices are validated and constrained to ensure that the maximum and minimum array bounds cannot be exceeded.
Potentially Dangerous Unmanaged APIs
In addition to the checks performed for unsafe code, review unmanaged code for the use of potentially dangerous APIs such as strcpy and strcat. (Table 12 provides a complete list.) Be sure to review any interop calls, as well as the unmanaged code itself, to make sure that bad assumptions are not made as execution control passes from managed to unmanaged code. Table 11 shows potential vulnerabilities that can arise in unmanaged code.
Table 11: Unmanaged API Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
A potentially dangerous unmanaged API is called improperly | An attacker could exploit the weakness in the potentially dangerous API to gain access to arbitrary memory locations or run arbitrary code. |
Does the code call potentially dangerous unmanaged APIs?
Potentially dangerous unmanaged functions can be categorized as follows:
- Unbound Functions (UF). These functions do not expect an explicit bound parameter for the number of bytes that might be modified for one of their parameters. These are typically the most potentially dangerous functions and should never be used.
- NULL Terminated Functions (NTF). These functions require a NULL terminated string. If they are given a string without NULL termination, they could overwrite memory. If the code uses NULL terminated functions, make sure that loops do not overrun the buffer by one; for example, for a 512-element buffer, the loop condition should be i < 512, not i <= 512.
- Non-NULL Terminated Functions (NNTF). The output of most string functions is NULL terminated; however, the output of a few is not. These require special treatment to avoid programming defects. If the code uses non-NULL terminated functions, make sure that the loop does have an additional placeholder for NULL.
- Format Functions (FF). Format string functions allow a programmer to format input and output. If the program does not supply a fixed format string, the data itself is interpreted as one, which can lead to programming defects and exploits.
Table 12 shows a range of potentially dangerous unmanaged APIs and the associated categories into which they fall.
Table 12: Potentially Dangerous Unmanaged APIs
Functions | Category |
---|---|
Strcpy | UF, NTF |
Strcat | UF, NTF |
Strlen | NTF |
Strncpy | NNTF |
Strncat | NTF |
Strcmp | NTF |
mbstowcs | NNTF |
_strdup | NTF |
_strrev | NTF |
Strstr | NTF |
Sprintf | FF, NTF |
_snprintf | FF, NTF |
Printf | FF, NTF |
Fprintf | FF, NTF |
Gets | UF |
Scanf | FF, NTF |
Fscanf | FF, NTF |
Sscanf | FF, NTF |
Strcspn | NTF |
MultiByteToWideChar | NNTF |
WideCharToMultiByte | NNTF |
GetShortPathNameW | NTF |
GetLongPathNameW | NTF |
WinExec | NTF |
CreateProcessW | NTF |
GetEnvironmentVariableW | NTF |
SetEnvironmentVariableW | NTF |
ExpandEnvironmentStringsW | NTF |
SearchPathW | NTF |
Lstrcpy | UF, NTF |
Wcscpy | UF, NTF |
_mbscpy | UF, NTF |
StrCpyA | UF, NTF |
StrCpyW | UF, NTF |
lstrcatA | UF, NTF |
lstrcatW | UF, NTF |
Wcscat | UF, NTF |
_mbscat | UF, NTF |
Wcslen | NTF |
_mbslen | NTF |
_mbstrlen | NTF |
lstrlenA | NTF |
lstrlenW | NTF |
Wcsncpy | NNTF |
_mbsncpy | NNTF |
StrCpyN | NNTF |
lstrcpynW | NTF |
lstrcatnA | NTF |
lstrcatnW | NTF |
Wcsncat | NTF |
_mbsncat | NTF |
_mbsnbcat | NTF |
lstrcmpA | NTF |
lstrcmpW | NTF |
StrCmp | NTF |
Wcscmp | NTF |
_mbscmp | NTF |
Strcoll | NTF |
Wcscoll | NTF |
_mbscoll | NTF |
_stricmp | NTF |
lstrcmpiA | NTF |
lstrcmpiW | NTF |
_wcsicmp | NTF |
_mbsicmp | NTF |
_stricoll | NTF |
_wcsicoll | NTF |
_mbsicoll | NTF |
StrColl | NTF |
_wcsdup | NTF |
_mbsdup | NTF |
StrDup | NTF |
_wcsrev | NTF |
_mbsrev | NTF |
_strlwr | NTF |
_mbslwr | NTF |
_wcslwr | NTF |
_strupr | NTF |
_mbsupr | NTF |
_wcsupr | NTF |
Wcsstr | NTF |
_mbsstr | NTF |
Strspn | NTF |
Wcsspn | NTF |
_mbsspn | NTF |
Strpbrk | NTF |
Wcspbrk | NTF |
_mbspbrk | NTF |
Wcsxfrm | NTF |
Wcscspn | NTF |
_mbscspn | NTF |
Swprintf | FF |
wsprintfA | FF |
wsprintfW | FF |
Vsprintf | FF |
Vswprintf | FF |
_snwprintf | FF |
_vsnprintf | FF |
_vsnwprintf | FF |
Vprintf | FF |
Vwprintf | FF |
Vfprintf | FF |
Vwfprintf | FF |
_getws | UF |
Fwscanf | FF |
Wscanf | FF |
Swscanf | FF |
OemToCharA | UF, NTF |
OemToCharW | UF, NTF |
CharToOemA | UF, NTF |
CharToOemW | UF, NTF |
CharUpperA | NTF |
CharUpperW | NTF |
CharUpperBuffW | NTF |
CharLowerA | NTF |
CharLowerW | NTF |
CharLowerBuffW | NTF |
Auditing and Logging
Does the code use auditing and logging effectively? Table 13 shows the vulnerabilities that can be introduced if auditing and logging are not used or are used incorrectly.
Table 13: Auditing and Logging Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Lack of logging | It is difficult to detect and repel intrusion attempts. |
Sensitive data revealed in logs | An attacker could use logged credentials to attack the server or could steal other sensitive data from the log. |
Does the application log sensitive data?
Review the code to see whether sensitive details are logged. Credentials and sensitive user data should not be logged. An application often handles information that requires higher privileges to view than its log file does; exposing that data in a log file makes it more likely that the data will be stolen.
Multi-Threading
Multi-threaded code is prone to subtle timing-related issues or race conditions that can result in security vulnerabilities. To locate multi-threaded code, search source code for the text "Thread" to identify where new Thread objects are created, as shown in the following code example.
Thread t = new Thread(new ThreadStart(someObject.SomeThreadStartMethod));
Table 14 shows potential vulnerabilities that can arise in multi-threaded code.
Table 14: Threading Vulnerabilities and Implications
Vulnerability | Implications |
---|---|
Race conditions | Incorrect logic and application malfunction |
Synchronization issues | Application malfunction |
The following review questions help you to identify potential threading vulnerabilities:
- Is the code subject to race conditions?
- Does the code impersonate?
- Does the code contain static class constructors?
- Does the code synchronize Dispose methods?
Is the code subject to race conditions?
Check for race conditions, especially in static methods and constructors. Consider the following code example.
private static int amtRecvd = 0;
public static int IncrementAmountReceived(int increment)
{
return(amtRecvd += increment);
}
If two threads call this code at the same time, it could result in an incorrect calculation for the amtRecvd value.
Code is particularly vulnerable to race conditions if it caches the results of a security check—for example, in a static or global variable—and then uses the flag to make subsequent security decisions.
Does the code impersonate?
Is the thread that creates a new thread currently impersonating? The new thread always assumes the process-level security context and not the security context of the existing thread.
Does the code contain static class constructors?
Review static class constructors to verify that they are not vulnerable if two or more threads access them simultaneously. If necessary, synchronize the threads to prevent this condition.
Does the code synchronize Dispose methods?
If an object's Dispose method is not synchronized, two threads could execute Dispose on the same object at the same time. This can cause security issues, particularly if the cleanup code releases unmanaged resource handles such as file, process, or thread handles.
Vulnerability/Question Matrix
Table 15 associates vulnerabilities to questions. Use this table to develop a set of questions to ask during code review if you are concerned about a specific vulnerability.
Table 15: Vulnerability/Question Matrix
Vulnerability | Questions |
---|---|
SQL Injection | |
Non-validated input used to generate SQL queries | Is the application susceptible to SQL injection? |
Does the code use parameterized stored procedures? | |
Does the code use parameters in SQL statements? | |
Does the code attempt to filter input? | |
Cross-Site Scripting | |
Unvalidated and untrusted input in the HTML output stream | Does the code echo user input or URL parameters back to a Web page? |
Does the code persist user input or URL parameters in a data store that could later be displayed on a Web page? | |
Input / Data Validation | |
Reliance on client-side validation | Does the code rely on client-side validation? |
Use of input file names, URLs, or user names for security decisions | Is the code susceptible to canonicalization attacks? |
Application-only filters for malicious input | Does the code validate data from all sources? |
Does the code centralize its approach? | |
Code Access Security | |
Improper use of link demands or asserts | Does the code use link demands or assert calls? |
Code allows untrusted callers | Does your code use AllowPartiallyTrustedCallers Attribute? |
Does the code use potentially dangerous permissions? | |
Does the code give dependencies too much trust? | |
Exception Management | |
Failing to use structured exception handling | Does the code use proper and consistent error checking? |
Does the application fail securely in the event of exceptions? | |
Revealing too much information to the client | Do error messages give away too much information? |
Impersonation | |
Revealing service account credentials to the client | Does the application use hard coded impersonation credentials? |
Code executes with higher privileges than expected | Does the code clean up properly when it uses impersonation? |
Sensitive Data | |
Storing secrets in code | Does the code store secrets? |
Storing secrets in clear text | Is sensitive data stored in predictable locations? |
Passing sensitive data in clear text over networks | Does the code store secrets? |
Cryptography | |
Using custom cryptography | Did the team develop cryptographic algorithms? |
Using the wrong algorithm or too small a key size | Does the code use the right algorithm with an adequate key size? |
Does the code generate random numbers for cryptographic purposes? | |
Failing to secure encryption keys | How does the code manage and store encryption keys? |
Using the same key for a prolonged period of time | How does the code manage and store encryption keys? |
Unsafe Code | |
Buffer overrun in unmanaged code or code marked /unsafe | Is the code susceptible to buffer overruns? |
Integer overflow in unmanaged code or code marked /unsafe | Is the code susceptible to integer overflows? |
Format string problem in unmanaged code or code marked /unsafe | Is the code susceptible to format string problems? |
Array out of bounds in unmanaged code or code marked /unsafe | Is the code susceptible to array out-of-bound errors? |
Data truncation in unmanaged code or code marked /unsafe | |
Potentially Dangerous Unmanaged APIs | |
A potentially dangerous unmanaged API is called improperly | Does the code call potentially dangerous unmanaged APIs? |
Auditing and Logging | |
Sensitive data revealed in logs | Does the code log sensitive data? |
Multi-Threading | |
Race conditions | Is the code subject to race conditions? |
Synchronization issues | Does the code contain static class constructors? |
Does the code synchronize Dispose methods? | |
Additional Resources
- How to: Perform a Security Code Review for Managed Code (.NET Framework 2.0)
- Security Question List: ASP.NET 2.0
- Security Engineering Index
Feedback
Provide feedback by using either a Wiki or e-mail:
- Wiki. Security Guidance Feedback Wiki page: https://channel9.msdn.com/wiki/securityguidancefeedback/
- E-mail. Send e-mail to secguide@microsoft.com.
We are particularly interested in feedback regarding the following:
- Technical issues specific to recommendations
- Usefulness and usability issues
Technical Support
Technical support for the Microsoft products and technologies referenced in this guidance is provided by Microsoft Support Services. For product support information, please visit the Microsoft Support Web site at https://support.microsoft.com.
Community and Newsgroups
Community support is provided in the forums and newsgroups:
- MSDN Newsgroups:https://www.microsoft.com/communities/newsgroups/default.mspx
- ASP.NET Forums: http://forums.asp.net
To get the most benefit, find the newsgroup that corresponds to your technology or problem. For example, if you have a problem with ASP.NET security features, you would use the ASP.NET Security forum.
Contributors and Reviewers
- External Contributors and Reviewers: Akshay Aggarwal; Anil John; Frank Heidt; Jason Schmitt, SPI Dynamics; Keith Brown, Pluralsight; Loren Kornfelder
- Microsoft Product Group: Don Willits, Eric Jarvi, Randy Miller, Stefan Schackow
- Microsoft IT Contributors and Reviewers: Shawn Veney
- Microsoft EEG: Eric Brechner, James Waletzky
- Microsoft patterns & practices Contributors and Reviewers: Carlos Farre, Jonathan Wanagel
- Test team: Larry Brader, Microsoft Corporation; Nadupalli Venkata Surya Sateesh, Sivanthapatham Shanmugasundaram, Infosys Technologies Ltd.
- Edit team: Nelly Delgado, Microsoft Corporation
- Release Management: Sanjeev Garg, Microsoft Corporation