Tips for improving application robustness and minimizing upgrade upheaval

In general we recommend against configuring a deployed application to use a new version of SQL Native Client (SNAC) unless the application developer has certified that the application has been tested with that version of SNAC. This is because we don't know how well applications will behave when they encounter differences between a new version of SNAC and earlier versions of SNAC or MDAC/WDAC. In this post we'll look at the types of change we make between releases and how to design an application to anticipate them. Briefly, the types of change that could cause problems can be categorized as:

  • New functionality

  • New data types

  • Error handling

Let's look at each of these in turn.

New functionality

This is an area where there should be no impact unless an application opts in to use specific new features, so the application developer has complete control and the scope for existing code misbehaving on upgrade is very small.

New data types

This is the area where most problems occur. With new releases of SQL Server we often add new data types. New types should have no impact on deployed applications when the server is upgraded but the client application isn't. Firstly, for an application to be exposed to a new type at all, schema changes would need to be made. Secondly, even if a schema change is made, applications already deployed will 'see' new types as an existing type, often nvarchar. This is because SQL Server knows which types the client understands from the TDS protocol version the client API uses, and it converts new types to pre-existing types when it sends data to those clients.

We generally make it possible for an application using an older version of SNAC to detect the actual type (via a 'local type name' property) in case it needs to be aware of the new type but cannot upgrade to the newer version of SNAC that exposes it. This means that applications should base their logic on API type codes (e.g. SQL_VARCHAR in ODBC) rather than local type names (e.g. 'xml' or 'datetimeoffset') wherever possible. In practice this is the simplest way to code applications, so it is no burden.
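To illustrate, here is a minimal ODBC sketch of this approach (hstmt and the column ordinal are placeholders). It branches on the API type code returned by SQLDescribeCol, and only consults the data-source-specific type name, via SQLColAttribute with SQL_DESC_TYPE_NAME, when behavior genuinely has to differ for a particular server type:

SQLSMALLINT dataType, nameLen, decimalDigits, nullable;
SQLULEN     columnSize;
SQLWCHAR    colName[128];

SQLDescribeColW(hstmt, 1, colName, 128, &nameLen,
                &dataType, &columnSize, &decimalDigits, &nullable);

if (dataType == SQL_WVARCHAR) {
    // Usually nothing more is needed. If the application really must
    // distinguish the underlying server type, check the local type name:
    SQLWCHAR    typeName[128];
    SQLSMALLINT typeNameLen;
    SQLColAttributeW(hstmt, 1, SQL_DESC_TYPE_NAME, typeName,
                     sizeof(typeName), &typeNameLen, NULL);
    // typeName might be L"nvarchar", or L"xml" from a newer server
}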

Problems can arise when a deployed application is configured to use a later version of SNAC than the one it was developed and tested with, and it then encounters types introduced since its deployment. The way to avoid this kind of problem is to treat types the application doesn't recognise as nvarchar; ODBC and OLE DB provide conversions to and from nvarchar for all data types. Applications should avoid code such as

switch (dataType) {
case SQL_INTEGER:   // deal with integer
    break;
case SQL_WVARCHAR:  // deal with nvarchar
    break;
}

where a new type can introduce undefined behavior. Instead, add code to deal with unrecognised types:

switch (dataType) {
case SQL_INTEGER:   // deal with integer
    break;
case SQL_WVARCHAR:  // deal with nvarchar
    break;
default:            // either (a) add clean error handling code or
                    // (b) process the type as SQL_WVARCHAR; SNAC
                    // converts parameter and result values to and
                    // from SQL_C_WCHAR
    break;
}
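As a minimal sketch of option (b) (assuming hstmt is a valid statement handle and col is the column's ordinal), the default branch could simply bind the column as a wide-character buffer and let the driver perform the nvarchar conversion on fetch. The buffer size here is an illustrative placeholder; real code would size it from the reported column size or stream long values with SQLGetData:

// Treat an unrecognised type as nvarchar: bind a wide-character
// buffer and let the driver convert the value on fetch.
SQLWCHAR buffer[4000];  // illustrative size only
SQLLEN   indicator;
SQLBindCol(hstmt, col, SQL_C_WCHAR, buffer, sizeof(buffer), &indicator);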

One other pitfall is handling types with unspecified size, such as xml and varchar(max). SNAC returns the size of these types as SQL_SS_LENGTH_UNLIMITED in ODBC. This has the value 0 and can cause unpredictable behavior if an application uses it in arithmetic expressions, for buffer allocation for example. ODBC applications should treat a column or parameter size of 0 as 'huge' rather than 'small'. OLE DB returns the size of these types as ~0 (i.e. all bits set). Both ODBC and OLE DB in SNAC conform to the API specifications in using these values, so this isn't behavior we invented for SNAC alone, though their use isn't common in other products.
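One defensive pattern, shown here as a hedged sketch (hstmt, col and columnSize are placeholders, and the chunk size is arbitrary): test for the 'unlimited' sentinel before doing any buffer arithmetic, and fetch oversized values in chunks instead:

// Guard against the 'unlimited' size sentinel before sizing buffers.
if (columnSize == 0) {  // SQL_SS_LENGTH_UNLIMITED: xml, varchar(max), etc.
    // Retrieve the value in chunks rather than allocating columnSize + 1.
    SQLCHAR chunk[8192];  // illustrative chunk size only
    SQLLEN  indicator;
    while (SQLGetData(hstmt, col, SQL_C_CHAR, chunk,
                      sizeof(chunk), &indicator) != SQL_NO_DATA) {
        // append chunk to a growable buffer; check return codes in real code
    }
} else {
    // columnSize is meaningful: a buffer of columnSize + 1 characters is safe
}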

Finally, sometimes we strengthen our conformance to API specifications, and this can catch non-conforming applications. For example, in SNAC10 validation of the scale of SQL_TYPE_TIMESTAMP/DBTYPE_DBTIMESTAMP is stricter than in earlier versions. This is because in SQL Server 2008 datetime2 can have a scale (fractional seconds digits) of 0 to 7, whereas in earlier versions of SQL Server it could only be 0 or 3. It turns out that OLE DB didn't validate the bScale parameter of ICommandWithParameters::SetParameterInfo in MDAC and SNAC9, though it should have. Applications should always adhere to API specifications, even when a particular driver or provider's validation falls short.
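For instance, a conforming ODBC bind of a datetime2(7) parameter passes a valid fractional-seconds scale explicitly. This is a sketch only (hstmt is assumed to be bound to a suitable parameterized statement), but the ColumnSize of 27 and DecimalDigits of 7 match the documented 'YYYY-MM-DD hh:mm:ss.fffffff' format:

// fraction is expressed in nanoseconds; datetime2(7) has 100ns precision
SQL_TIMESTAMP_STRUCT ts = {2008, 8, 6, 12, 30, 0, 123456700};

SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                 SQL_C_TYPE_TIMESTAMP, SQL_TYPE_TIMESTAMP,
                 27,  // ColumnSize for datetime2(7)
                 7,   // DecimalDigits: a scale outside 0-7 fails SNAC10's stricter validation
                 &ts, sizeof(ts), NULL);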

Error handling

Newer releases of SNAC will often include changes in error messages. This is one way in which we try to make it easier for you to diagnose and resolve issues without having to make a support call. Sometimes we improve the clarity of a message; sometimes we add hints in the message to suggest the most common causes of, or cures for, particular errors. Therefore, applications should avoid dependencies on error message text and should use error codes (such as SQLSTATE in ODBC) or error numbers instead. If you do this you're also making it easier for your application to be used with localized versions of SNAC.
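As a hedged sketch, ODBC code might branch on the SQLSTATE and the SQL Server native error number (2627 is the well-known duplicate-key error) rather than parsing the message text; hstmt is a placeholder statement handle:

// Branch on SQLSTATE and native error number, never on message text.
SQLWCHAR    sqlState[6];
SQLINTEGER  nativeError;
SQLWCHAR    message[SQL_MAX_MESSAGE_LENGTH];
SQLSMALLINT messageLen;

if (SQLGetDiagRecW(SQL_HANDLE_STMT, hstmt, 1, sqlState, &nativeError,
                   message, SQL_MAX_MESSAGE_LENGTH, &messageLen) == SQL_SUCCESS) {
    if (nativeError == 2627) {
        // duplicate key: stable across SNAC releases, unlike the message text
    } else if (wcscmp(sqlState, L"23000") == 0) {
        // integrity constraint violation (standard SQLSTATE class)
    }
}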

Occasionally when we optimize SNAC in a new release we change the order in which we validate parameters, or split or merge code paths. This can mean that when an application tries to do something invalid, the error reported changes between releases or an error is reported on a different API call. Generally we aim for earlier error detection, so an error that is caught at execute time in one version might be caught at bind time in a later version. It's also possible that if an API call has multiple errors, the error we detect first may change. In almost all cases changes in this area should have no impact on code already debugged and tested with an earlier version of SNAC, since the changes invariably mean there is an error in the application logic. Note that errors associated with runtime conditions such as connection failures or duplicate key errors are much less likely to change.
