

2 - Much ADO about Data Access: Using the Data Access Application Block

patterns & practices Developer Center


Introduction

When did you last write an enterprise-level application where you didn't need to handle data? And when you were handling data there was a good chance it came from some kind of relational database. Working with databases is the single most common task most enterprise applications need to accomplish, so it's no surprise that the Data Access Application Block is the most widely used of all of the Enterprise Library blocks—and no coincidence that we decided to cover it in the first of the application block chapters in this book.

A great many of the millions of Enterprise Library users around the world first cut their teeth on the Data Access block. Why? Because it makes it easy to implement the most commonly used data access operations without needing to write the same repetitive code over and over again, and without having to worry about which database the application will target. As long as there is a Data Access block provider available for your target database, you can use the same code to access the data. You don’t need to worry about the syntax for parameters, the idiosyncrasies of the individual data access methods, or the different data types that are returned.

This means that it's also easy to switch your application to use a different database, without having to rewrite code, recompile, and redeploy. Administrators and operators can change the target database to a different server, and even to a different database (such as moving from Oracle to Microsoft SQL Server or the reverse), without affecting the application code. In the current release, the Data Access Application Block contains providers for SQL Server and SQL Server Compact Edition. Support for Oracle is deprecated in this release. There are also third-party providers available for the IBM DB2, MySQL, Oracle (ODP.NET), PostgreSQL, and SQLite databases. For more information on these, see the Enterprise Library Contrib site.

Beth says:
Using the Data Access Application Block shields the developer from the differences in the syntax used by different databases. It also facilitates switching between databases.

What Does the Data Access Application Block Do?

The Data Access Application Block abstracts the actual database you are using, and exposes a series of methods that make it easy to access that database to perform common tasks. It is designed to simplify the task of calling stored procedures, but also provides full support for the use of parameterized SQL statements. As an example of how easy the block is to use, when you want to fill a DataSet you simply create an instance of the appropriate Database class, use it to get an appropriate command instance (such as DbCommand), and pass this to the ExecuteDataSet method of the Database class. You don't need to create a DataAdapter or call the Fill method. The ExecuteDataSet method manages the connection, and carries out all the tasks required to populate your DataSet. In a similar way, the Database class allows you to obtain a DataReader, execute commands directly, and update the database from a DataSet. The block also supports transactions to help you manage multiple operations that can be rolled back if an error occurs.
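The pattern just described can be sketched in a few lines. This is a minimal illustration, not a complete program: it assumes a default database is already configured and that a stored procedure named GetOrders (a hypothetical name) exists.

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

// Create the default Database instance from configuration.
DatabaseProviderFactory factory = new DatabaseProviderFactory();
Database db = factory.CreateDefault();

// ExecuteDataSet creates the DataAdapter, opens and closes the
// connection, and returns a populated DataSet in a single call.
// "GetOrders" is a hypothetical stored procedure name.
DataSet orders = db.ExecuteDataSet("GetOrders");
```

Compare this with raw ADO.NET, where you would create a connection, a command, and a DataAdapter, and manage their lifetimes yourself.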

In addition to the more common approaches familiar to users of ADO.NET, the Data Access block also provides techniques for asynchronous data access for databases that support this feature, and provides the ability to return data as a sequence of objects suitable for client-side querying using techniques such as Language Integrated Query (LINQ). However, the block is not intended to be an Object/Relational Mapping (O/RM) solution. It uses mappings to relate parameters and relational data with the properties of objects, but does not implement an O/RM modeling solution.

Jana says:
If you want an Object/Relational Mapping solution, you should consider using the ADO.NET Entity Framework.

The major advantage of using the Data Access block, besides the simplicity achieved through the encapsulation of the boilerplate code that you would otherwise need to write, is that it provides a way to create provider-independent applications that can easily be moved to use a different source database type. In most cases, unless your code takes advantage of methods specific to a particular database, the only change required is to update the contents of your configuration file with the appropriate connection string. You don’t have to change the way you specify queries (such as SQL statements or stored procedure names), create and populate parameters, or handle return values. This also means reduced requirements for testing.

Data Operations Supported by the Data Access Block

The following table lists, by task, the most commonly used methods that the Data Access Application Block exposes to retrieve and update data. Some of the method names will be familiar to developers who have used ADO.NET directly.

Filling a DataSet and updating the database from a DataSet:
  • ExecuteDataSet. Creates, populates, and returns a DataSet.
  • LoadDataSet. Populates an existing DataSet.
  • UpdateDataSet. Updates the database using an existing DataSet.

Reading multiple data rows:
  • ExecuteReader. Creates and returns a provider-independent DbDataReader instance.

Executing a command:
  • ExecuteNonQuery. Executes the command and returns the number of rows affected. Other return values (if any) appear as output parameters.
  • ExecuteScalar. Executes the command and returns a single value.

Retrieving data as a sequence of objects:
  • ExecuteSprocAccessor. Returns data selected by a stored procedure as a sequence of objects for client-side querying.
  • ExecuteSqlStringAccessor. Returns data selected by a SQL statement as a sequence of objects for client-side querying.

Retrieving XML data (SQL Server only):
  • ExecuteXmlReader. Returns data as a series of XML elements exposed through an XmlReader. Note that this method is specific to the SqlDatabase class (not the underlying Database class).

Creating a command:
  • GetStoredProcCommand. Returns a command object suitable for executing a stored procedure.
  • GetSqlStringCommand. Returns a command object suitable for executing a SQL statement (which may contain parameters).

Working with command parameters:
  • AddInParameter. Creates a new input parameter and adds it to the parameter collection of a command.
  • AddOutParameter. Creates a new output parameter and adds it to the parameter collection of a command.
  • AddParameter. Creates a new parameter of the specified type and direction and adds it to the parameter collection of a command.
  • GetParameterValue. Returns the value of the specified parameter as an Object type.
  • SetParameterValue. Sets the value of the specified parameter.

Working with transactions:
  • CreateConnection. Creates and returns a connection for the current database that allows you to initiate and manage a transaction over the connection.

You can see from this table that the Data Access block supports almost all of the common scenarios that you will encounter when working with relational databases. Each data access method also has multiple overloads, designed to simplify usage and integrate—when necessary—with existing data transactions. In general, you should choose the overload you use based on the following guidelines:

  • Overloads that accept an ADO.NET DbCommand object provide the most flexibility and control for each method.
  • Overloads that accept a stored procedure name and a collection of values to be used as parameter values for the stored procedure are convenient when your application calls stored procedures that require parameters.
  • Overloads that accept a CommandType value and a string that represents the command are convenient when your application executes inline SQL statements, or stored procedures that require no parameters.
  • Overloads that accept a transaction allow you to execute the method within an existing transaction.
  • If you use the SqlDatabase type, you can execute several of the common methods asynchronously by using the Begin and End versions of the methods.
  • You can use the Database class to create Accessor instances that execute data access operations both synchronously and asynchronously, and return the results as a series of objects suitable for client-side querying using technologies such as LINQ.
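The first three overload styles in the list above can be sketched side by side. This is an illustrative comparison only; the stored procedure, table name, and parameter value are hypothetical, and a configured Database instance named db is assumed.

```csharp
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

Database db = new DatabaseProviderFactory().CreateDefault();

// 1. DbCommand overload: the most flexibility and control.
DbCommand cmd = db.GetStoredProcCommand("ListOrdersByState");
db.AddInParameter(cmd, "state", DbType.String, "Colorado");
DataSet ds1 = db.ExecuteDataSet(cmd);

// 2. Stored procedure name plus a collection of parameter values.
DataSet ds2 = db.ExecuteDataSet("ListOrdersByState", "Colorado");

// 3. CommandType plus a command string, for inline SQL or
//    stored procedures that take no parameters.
DataSet ds3 = db.ExecuteDataSet(CommandType.Text,
                                "SELECT * FROM OrderList");
```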

How Do I Use the Data Access Block?

Before you start to use the Data Access block, you must add it to your application. You configure the block to specify the databases you want to work with, and add the relevant assemblies to your project. Then you can create instances of these databases in your code and use them to read and write data.

Adding the Data Access Application Block to Your Project

The first step in using the Data Access block is to add the EnterpriseLibrary.Data NuGet package to your project; NuGet adds the required assemblies and references automatically. If you plan to use a SQL CE database with the block, you should also add the EnterpriseLibrary.Data.SqlCe NuGet package.

Configuring the Block and Referencing the Required Assemblies

The next step in using the Data Access block is to configure the databases you want to access. The block makes use of the standard <connectionStrings> section of the App.config, Web.config, or other configuration file to store the individual database connection strings, with the addition of a small Enterprise Library-specific section that defines which of the configured databases is the default. The following is an example configuration file that shows a connection string and the setting that defines the default database.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="dataConfiguration"
      type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration
      .DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data"
      requirePermission="true"/>
  </configSections>
  <dataConfiguration defaultDatabase="ExampleDatabase"/>

  <connectionStrings>
    <add name="ExampleDatabase" connectionString="..."
      providerName="System.Data.SqlClient" />
    <add name="AsyncExampleDatabase" connectionString="..."
      providerName="System.Data.SqlClient" />
  </connectionStrings>
...
</configuration>

Optionally, you can use the Enterprise Library configuration tool to configure these settings.

If you are working with an Oracle database, you can use the Oracle provider included with Enterprise Library, which relies on the ADO.NET OracleClient provider and requires you to reference or add the assembly System.Data.OracleClient.dll. However, keep in mind that the OracleClient provider is deprecated in version 4.0 of the .NET Framework, and its support in Enterprise Library 6 is likewise deprecated. Although it still works in Enterprise Library 6, for future development you should consider a different implementation of the Database class that uses a different Oracle driver, such as the one available from the Enterprise Library Contrib site.

To make it easier to use the objects in the Data Access block, you can add references to the relevant namespaces, such as Microsoft.Practices.EnterpriseLibrary.Data and Microsoft.Practices.EnterpriseLibrary.Data.Sql, to your project.
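For example, at the top of each code file that uses the block you might add:

```csharp
using Microsoft.Practices.EnterpriseLibrary.Data;
// Only needed when you use SQL Server-specific features
// such as the SqlDatabase class and ExecuteXmlReader.
using Microsoft.Practices.EnterpriseLibrary.Data.Sql;
```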

Creating Database Instances

You can use a variety of techniques to obtain a Database instance for the database you want to access. The section "Instantiating and Using Enterprise Library Objects" in Chapter 1, "Introduction" describes the different approaches you can use. The examples you can download for this chapter use the simplest approach: calling the CreateDefault or Create method of the DatabaseProviderFactory class, as shown here, and storing these instances in application-wide variables so that they can be accessed from anywhere in the code.

// Configure the DatabaseFactory to read its configuration from the .config file
DatabaseProviderFactory factory = new DatabaseProviderFactory();

// Create the default Database object from the factory.
// The actual concrete type is determined by the configuration settings.
Database defaultDB = factory.CreateDefault();

// Create a Database object from the factory using the connection string name.
Database namedDB = factory.Create("ExampleDatabase");

The following snippet shows the configuration settings in the app.config file in the project.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="dataConfiguration" type="..." />
  </configSections>
  <dataConfiguration defaultDatabase="ExampleDatabase"/>
  <connectionStrings>
    <add name="ExampleDatabase" connectionString="..."
      providerName="System.Data.SqlClient" />
    <add name="AsyncExampleDatabase" connectionString="..."
      providerName="System.Data.SqlClient" />
  </connectionStrings>
  ...
</configuration>

The code above shows how you can get an instance of the default database and a named instance (using the name in the connection strings section). Using the default database is a useful approach because you can change which of the databases defined in your configuration is the default simply by editing the configuration file, without requiring recompilation or redeployment of the application.

Beth says:
Although you need the dataConfiguration section if you use the CreateDefault method, it doesn’t need to be there if you use the Create method. The Create method reads the connection strings directly.

Notice that the code above references the database instances as instances of the Database base class. This is required for compatibility if you want to be able to change the database type at some later stage. However, it means that you can only use the features available across all of the possible database types (the methods and properties defined in the Database class).

Some features are only available in the concrete types for a specific database. For example, the ExecuteXmlReader method is only available in the SqlDatabase class. If you want to use such features, you must cast the database type you instantiate to the appropriate concrete type. The following code creates an instance of the SqlDatabase class.

Markus says:
You may need to cast the Database type to a specific type such as SqlDatabase to access specific features. You should check that the reference returned is not null before using it.

// Create a SqlDatabase object from configuration using the default database.
SqlDatabase sqlServerDB 
  = factory.CreateDefault() as SqlDatabase;

As an alternative to using the DatabaseProviderFactory class, you could use the static DatabaseFactory façade to create your Database instances. You must invoke the SetDatabaseProviderFactory method to set the details of the default database from the configuration file.

DatabaseFactory.SetDatabaseProviderFactory(factory, false);
defaultDB = DatabaseFactory.CreateDatabase("ExampleDatabase");
// Uses the default database from the configuration file.
sqlServerDB = DatabaseFactory.CreateDatabase() as SqlDatabase;

In addition to using configuration to define the databases you will use, the Data Access block allows you to create instances of concrete types that inherit from the Database class directly in your code, as shown here. All you need to do is provide a connection string that specifies the appropriate ADO.NET data provider type (such as SqlClient).

// Assume the method GetConnectionString exists in your application and 
// returns a valid connection string.
string myConnectionString = GetConnectionString();
SqlDatabase sqlDatabase = new SqlDatabase(myConnectionString);

The Example Application

Now that you have your new Database object ready to go, we'll show you how you can use it to perform a variety of tasks. You can download an example application (a simple console-based application) that demonstrates all of the scenarios you will see in the remainder of this chapter. You can run this directly from the bin\debug folder, or open the solution named DataAccess in Visual Studio to see all of the code as you run the examples.

Note

The two connection strings for the database we provide with this example are:
Data Source=(localdb)\v11.0;AttachDbFilename=|DataDirectory|\DataAccessExamples.mdf;
Integrated Security=True
Data Source=(localdb)\v11.0;Asynchronous Processing=true;
AttachDbFilename=|DataDirectory|\DataAccessExamples.mdf;Integrated Security=True
If you have configured a different database using the scripts provided with the example, you may find that you get an error when you run this example. It is likely that you have an invalid connection string in your App.config file for your database.
In addition, the final example for this block uses the Distributed Transaction Coordinator (DTC) service. This service may not be set to auto-start on your machine. If you receive an error that the DTC service is not available, open the Services MMC snap-in from your Administrative Tools menu and start the service manually; then run the example again.

Reading Multiple Data Rows

One of the most common operations when working with a database is reading multiple rows of data. In a .NET application, you usually access these rows as a DataReader instance, or store them in a DataTable (usually within a DataSet you create). In this section we'll look at the use of the ExecuteReader method that returns a DataReader. You will see how to use a DataSet with the Data Access block methods later in this chapter.

Beth says:
The DataSet, DataReader, and related classes are still core elements of ADO.NET. However, newer technology options are available, such as Entity Framework and LINQ to SQL.

Reading Rows Using a Query with No Parameters

Simple queries consisting of an inline SQL statement or a stored procedure, which take no parameters, can be executed using the ExecuteReader method overload that accepts a CommandType value and a SQL statement or stored procedure name as a string.

The following code shows the simplest approach for a stored procedure, where you can also omit the CommandType parameter. The default is CommandType.StoredProcedure (unlike ADO.NET, where the default is CommandType.Text).

// Call the ExecuteReader method by specifying just the stored procedure name.
using (IDataReader reader = namedDB.ExecuteReader("MyStoredProcName"))
{
  // Use the values in the rows as required.
}

To use an inline SQL statement, you must specify the appropriate CommandType value, as shown here.

// Call the ExecuteReader method by specifying the command type
// as a SQL statement, and passing in the SQL statement.
using (IDataReader reader = namedDB.ExecuteReader(CommandType.Text, 
                            "SELECT TOP 1 * FROM OrderList"))
{
  // Use the values in the rows as required - here we are just displaying them.
  DisplayRowValues(reader);
}

The example named Return rows using a SQL statement with no parameters uses this code to retrieve a DataReader containing the first order in the sample database, and then displays the values in this single row. It uses a simple auxiliary routine that iterates through all the rows and columns, writing the values to the console screen.

void DisplayRowValues(IDataReader reader)
{
  while (reader.Read())
  {
    for (int i = 0; i < reader.FieldCount; i++)
    {
      Console.WriteLine("{0} = {1}", reader.GetName(i), reader[i].ToString());
    }
    Console.WriteLine();
  }
}

The result is a list of the columns and their values in the DataReader, as shown here.

Id = 1
Status = DRAFT
CreatedOn = 01/02/2009 11:12:06
Name = Adjustable Race
LastName = Abbas
FirstName = Syed
ShipStreet = 123 Elm Street
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = Two-day shipping
State = Colorado

Reading Rows Using an Array of Parameter Values

While you may use simple no-parameter stored procedures and SQL statements in some scenarios, it's far more common to use queries that accept input parameters that select rows or specify how the query will execute within the database server. If you use only input parameters, you pass them to the stored procedure or SQL statement directly and let the Database class wrap them in the appropriate DbParameter objects. Note that this means you must pass them in the same order as they are expected by the query, because you are not using names for these parameters—you are only supplying the actual values. The following code shows how you can execute a stored procedure that takes a single string parameter.

// Call the ExecuteReader method with the stored procedure
// name and an Object array containing the parameter values.
using (IDataReader reader = defaultDB.ExecuteReader("ListOrdersByState", 
                                      "Colorado"))
{
  // Use the values in the rows as required - here we are just displaying them.
  DisplayRowValues(reader);
}

The example named Return rows using a stored procedure with parameters uses this code to query the sample database, and generates the following output.

Id = 1
Status = DRAFT
CreatedOn = 01/02/2009 11:12:06
Name = Adjustable Race
LastName = Abbas
FirstName = Syed
ShipStreet = 123 Elm Street
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = Two-day shipping
State = Colorado

Id = 2
Status = DRAFT
CreatedOn = 03/02/2009 01:12:06
Name = All-Purpose Bike Stand
LastName = Abel
FirstName = Catherine
ShipStreet = 321 Cedar Court
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = One-day shipping
State = Colorado

Reading Rows Using Queries with Named Parameters

The technique in the previous example of supplying just an array of parameter values is easy and efficient, but has some limitations. It does not allow you to specify the direction (such as input or output), or the data type—which may be an issue if the data type of a parameter does not exactly match (or cannot be implicitly converted into) the correct type discovered for a stored procedure. If you create an array of parameters for your query, you can specify more details about the types of the parameters and the way they should be used.

In addition, some database systems allocate parameters used in SQL statements or stored procedures simply by position. However, many database systems, such as SQL Server, allow you to use named parameters. The database matches the names of the parameters sent with the command to the names of the parameters defined in the SQL statement or stored procedure. This means that you are not confined to adding parameters to your command in a specific order. However, be aware that if you use named parameters and then change the database type to one that does not support named parameters, any parameters that are supplied out of order will probably cause errors. (This may be difficult to detect if all of the parameters are of the same data type!)

To work with named parameters or parameters of defined types, you must access the Command object that will be used to execute the query, and manipulate its collection of parameters. The Data Access block makes it easy to create and access the Command object by using two methods of the Database class: GetSqlStringCommand and GetStoredProcCommand. These methods return an instance of the appropriate command class for the configured database as a provider-independent DbCommand type reference.

After you create the appropriate type of command, you can use the many variations of the Database methods to manipulate the collection of parameters. You can add parameters with a specific direction using the AddInParameter or AddOutParameter method, or by using the AddParameter method and providing a value for the ParameterDirection parameter. You can change the value of existing parameters already added to the command using the GetParameterValue and SetParameterValue methods.

The following code shows how easy it is to create a command, add an input parameter, and execute both a SQL statement and a stored procedure. Notice how the code specifies the command to which the Database class should add the parameter, the name, the data type, and the value of the new parameter.

// Read data with a SQL statement that accepts one parameter prefixed with @.
string sqlStatement = "SELECT TOP 1 * FROM OrderList WHERE State LIKE @state";

// Create a suitable command type and add the required parameter.
using (DbCommand sqlCmd = defaultDB.GetSqlStringCommand(sqlStatement))
{
  defaultDB.AddInParameter(sqlCmd, "state", DbType.String, "New York");

  // Call the ExecuteReader method with the command.
  using (IDataReader sqlReader = defaultDB.ExecuteReader(sqlCmd))
  {
    DisplayRowValues(sqlReader);
  }
}

// Now read the same data with a stored procedure that accepts one parameter.
string storedProcName = "ListOrdersByState";

// Create a suitable command type and add the required parameter.
using (DbCommand sprocCmd = defaultDB.GetStoredProcCommand(storedProcName))
{
  defaultDB.AddInParameter(sprocCmd, "state", DbType.String, "New York");

  // Call the ExecuteReader method with the command.
  using (IDataReader sprocReader = defaultDB.ExecuteReader(sprocCmd))
  {
    DisplayRowValues(sprocReader);
  }
}

The example named Return rows using a SQL statement or stored procedure with named parameters uses the code you see above to execute a SQL statement and a stored procedure against the sample database. The code provides the same parameter value to each, and both queries return the same single row, as shown here.

Id = 4
Status = DRAFT
CreatedOn = 07/02/2009 05:12:06
Name = BB Ball Bearing
LastName = Abel
FirstName = Catherine
ShipStreet = 888 Main Street
ShipCity = New York
ShipZipCode = 54321
ShippingOption = Three-day shipping
State = New York

Beth says:
Any time you pass in a SQL string, you should consider carefully whether you need to validate it against a possible SQL injection attack.

Retrieving Data as Objects

Modern programming techniques typically concentrate on data as objects. This approach is useful if you use Data Transfer Objects (DTOs) to pass data around your application layers, implement a data access layer using O/RM techniques, or want to take advantage of new client-side data querying techniques such as LINQ.

Jana says:
If you need to move data over the network, consider using WCF Data Services.

The Data Access block is not, in itself, an O/RM solution; but it contains features that allow you to extract data using a SQL statement or a stored procedure as the query, and have the data returned to you as a sequence of objects that implements the IEnumerable interface. This allows you to execute queries, or obtain lists or arrays of objects that represent the original data in the database.

About Accessors

The block provides two core classes for performing this kind of query: the SprocAccessor and the SqlStringAccessor. You can create and execute these accessors in one operation using the ExecuteSprocAccessor and ExecuteSqlStringAccessor methods of the Database class, or create a new accessor directly and then call its Execute method.

Accessors use two other objects to manage the parameters you want to pass into the accessor (and on to the database as it executes the query), and to map the values in the rows returned from the database to the properties of the objects it will return to the client code. Figure 1 shows the overall process.

Figure 1 - Overview of data accessors and the related types


The accessor will attempt to resolve the parameters automatically using a default mapper if you do not specify a parameter mapper. However, this feature is only available for stored procedures executed against SQL Server and Oracle databases. It is not available when using SQL statements, or for other databases and providers, where you must specify a custom parameter mapper that can resolve the parameters.

If you do not specify an output mapper, the block uses a default map builder class that maps the column names of the returned data to properties of the objects it creates. Alternatively, you can create a custom mapping to specify the relationship between columns in the row set and the properties of the objects.
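As a sketch of a custom output mapping built with the block's MapBuilder class, the following maps one property whose name differs from its column. It assumes the Product class from the accessor example below and an illustrative column name, ProductName; treat the names as assumptions rather than part of the sample database.

```csharp
using Microsoft.Practices.EnterpriseLibrary.Data;

// Map all properties whose names match column names automatically,
// then override the mapping for a property whose column differs.
// "ProductName" is an illustrative column name.
IRowMapper<Product> mapper = MapBuilder<Product>
    .MapAllProperties()
    .Map(p => p.Name).ToColumn("ProductName")
    .Build();

// Pass the custom row mapper to the accessor in place of the
// default output mapping.
var products = defaultDB.ExecuteSprocAccessor<Product>(
    "GetProductList", mapper, "%bike%");
```

Because building a mapper involves reflection, retaining and reusing the mapper instance (rather than rebuilding it per query) follows the performance advice in the note above.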

Note

Inferring the details required to create the correct mappings means that the default parameter and output mappers can have an effect on performance. You may prefer to create your own custom mappers and retain a reference to them for reuse when possible to maximize performance of your data access processes when using accessors.

For a full description of the techniques for using accessors, see the Enterprise Library Reference Documentation. This chapter covers only the simplest approach: using the ExecuteSprocAccessor method of the Database class.

Creating and Executing an Accessor

The following code shows how you can use an accessor to execute a stored procedure and then manipulate the sequence of objects that is returned. You must specify the object type that you want the data returned as—in this example it is a simple class named Product that has the three properties: ID, Name, and Description.

The stored procedure takes a single parameter that is a search string, and returns details of all products in the database that contain this string. Therefore, the code calls the ExecuteSprocAccessor method passing the search string as the single parameter. It specifies the Product class as the type of object to return, and passes to the method the name of the stored procedure to execute and the array of parameter values.

// Create and execute a sproc accessor that uses the default
// parameter and output mappings.
var productData = defaultDB.ExecuteSprocAccessor<Product>("GetProductList",
                                                           "%bike%");

// Perform a client-side query on the returned data. Be aware that 
// the orderby and filtering is happening on the client, not in the database.
var results = from productItem in productData
              where productItem.Description != null
              orderby productItem.Name
              select new { productItem.Name, productItem.Description };

// Display the results
foreach (var item in results)
{
  Console.WriteLine("Product Name: {0}", item.Name);
  Console.WriteLine("Description: {0}", item.Description);
  Console.WriteLine();
}

The accessor returns the data as a sequence that, in this example, the code handles using a LINQ query to filter out items where the description is null, sort the list by name, and then create a new sequence of objects that have just the Name and Description properties. For more information on using LINQ to query sequences, see "LINQ Query Expressions" on MSDN.

Note

Keep in mind that returning sets of data that you manipulate on the client can have an impact on performance. In general, you should attempt to return data in the format required by the client, and minimize client-side data operations.

The example Return data as a sequence of objects using a stored procedure uses the code you see above to query the sample database and process the resulting rows. The output it generates is shown here.

Product Name: All-Purpose Bike Stand
Description: Perfect all-purpose bike stand for working on your bike at home. Quick-adjusting clamps and steel construction.

Product Name: Bike Wash - Dissolver
Description: Washes off the toughest road grime; dissolves grease, environmentally safe. 1-liter bottle.

Product Name: Hitch Rack - 4-Bike
Description: Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

For an example of creating an accessor and then calling the Execute method, see the section "Retrieving Data as Objects Asynchronously" later in this chapter.

Creating and Using Mappers

In some cases, you may need to create a custom parameter mapper to pass your parameters to the query that the accessor will execute. This typically occurs when you need to execute a SQL statement to work with a database system that does not support parameter resolution, or when a default mapping cannot be inferred due to a mismatch in the number or types of the parameters. The parameter mapper class must implement the IParameterMapper interface and contain a method named AssignParameters that takes a reference to the current Command instance and the array of parameters. The method simply needs to add the required parameters to the Command object's Parameters collection.
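A minimal parameter mapper might look like the following sketch. This is an illustration only: the parameter name @search and the assumption that the query takes a single string value are hypothetical, not part of the block's API.

```csharp
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

// A minimal custom parameter mapper. The block calls AssignParameters,
// passing the command about to be executed and the caller's values, so
// the mapper can add the required parameters to the command.
public class ProductSearchParameterMapper : IParameterMapper
{
  public void AssignParameters(DbCommand command, object[] parameterValues)
  {
    // Assumes the query expects a single string parameter named @search.
    DbParameter parameter = command.CreateParameter();
    parameter.ParameterName = "@search";
    parameter.DbType = DbType.String;
    parameter.Value = parameterValues[0];
    command.Parameters.Add(parameter);
  }
}
```

You would then pass an instance of this class to one of the accessor methods or constructors that accepts an IParameterMapper.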

More often, you will need to create a custom output mapper. To help you do this, the block provides a class called MapBuilder that you can use to create the set of mappings you require between the columns of the data set returned by the query and the properties of the objects you need.

By default, the accessor will expect to generate a simple sequence of a single type of object (in our earlier example, this was a sequence of the Product class). However, you can use an accessor to return a more complex graph of objects if you wish. For example, you might execute a query that returns a series of Order objects and the related OrderLines objects for all of the selected orders. Simple output mapping cannot cope with this scenario, and neither can the MapBuilder class. In this case, you would create a result set mapper by implementing the IResultSetMapper interface. Your custom row set mapper must contain a method named MapSet that receives a reference to an object that implements the IDataReader interface. The method should read all of the data available through the data reader, process it to create the sequence of objects you require, and return this sequence.
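As an illustration of a custom output mapper, the following sketch uses MapBuilder to map a column named ProductID (an assumed column name, not taken from the sample database schema) onto the ID property of the Product class from the earlier example, and maps the remaining properties by name:

```csharp
using Microsoft.Practices.EnterpriseLibrary.Data;

// Build a custom row mapper for the Product class: map most properties
// by matching column names, but map the ID property to the ProductID
// column explicitly (the column name here is an assumption).
IRowMapper<Product> rowMapper = MapBuilder<Product>
    .MapAllProperties()
    .Map(p => p.ID).ToColumn("ProductID")
    .Build();

// Use the custom mapper when executing the accessor.
var productData = defaultDB.ExecuteSprocAccessor<Product>(
    "GetProductList", rowMapper, "%bike%");
```

Retaining and reusing the rowMapper instance avoids the cost of re-inferring the mappings on every call.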

Retrieving XML Data

Some years ago, XML was the coolest new technology that was going to rule the world and change the way we think about data. In some ways, it did, though the emphasis on XML has receded as the relational database model continues to be the basis for most enterprise systems. However, the ability to retrieve data from a relational database as XML is useful in many scenarios, and is supported by the Data Access block.

Carlos says:
XML is particularly useful as a standard format for exchanging data between different systems. However, XML data is typically much larger than the equivalent binary or even JSON data.

SQL Server supports a mechanism called SQLXML that allows you to extract data as a series of XML elements, or in a range of XML document formats, by executing specially formatted SQL queries. You can use templates to precisely control the output, and have the server format the data in almost any way you require. For a description of the capabilities of SQLXML, see SQLXML on MSDN.

The Data Access block provides the ExecuteXmlReader method for querying data as XML. It takes a SQL statement that contains the FOR XML statement and executes it against the database, returning the result as an XmlReader. You can iterate through the resulting XML elements or work with them in any of the ways supported by the XML classes in the .NET Framework. However, as SQLXML is limited to SQL Server (the implementations of this type of query differ in other database systems), it is only available when you specifically use the SqlDatabase class (rather than the Database class).

The following code shows how you can obtain a SqlDatabase instance, specify a suitable SQLXML query, and execute it using the ExecuteXmlReader method.

// Create a SqlDatabase object from configuration using the default database.
SqlDatabase sqlServerDB 
  = DatabaseFactory.CreateDatabase() as SqlDatabase;

// Specify a SQL query that returns XML data.
string xmlQuery = "SELECT * FROM OrderList WHERE State = @state FOR XML AUTO";

// Create a suitable command type and add the required parameter
// NB: ExecuteXmlReader is only available for SQL Server databases
using (DbCommand xmlCmd = sqlServerDB.GetSqlStringCommand(xmlQuery))
{
  xmlCmd.Parameters.Add(new SqlParameter("state", "Colorado"));
  using (XmlReader reader = sqlServerDB.ExecuteXmlReader(xmlCmd))
  {
    // Iterate through the elements in the XmlReader
    while (!reader.EOF)
    {
      if (reader.IsStartElement())
      {
        Console.WriteLine(reader.ReadOuterXml());
      }
      else
      {
        // Skip non-element nodes so the loop cannot stall before EOF.
        reader.Read();
      }
    }
  }
}
Markus says:
You should check for a null reference before using the sqlServerDB reference.

The code above also shows a simple approach to extracting the XML data from the XmlReader returned from the ExecuteXmlReader method. One point to note is that, by default, the result is an XML fragment, and not a valid XML document. It is, effectively, a sequence of XML elements that represent each row in the results set. Therefore, at minimum, you must wrap the output with a single root element so that it is well-formed. For more information about using an XmlReader, see "Reading XML with the XmlReader" on MSDN.
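One simple way to wrap the fragment in a root element, sketched below, is to accumulate the elements into a string and then parse the result. The root element name Orders is purely illustrative, and the code reuses the sqlServerDB and xmlCmd objects from the example above:

```csharp
using System.Text;
using System.Xml;
using System.Xml.Linq;

// Collect the row elements inside a single root element so that the
// result is a well-formed XML document rather than a fragment.
var sb = new StringBuilder("<Orders>");
using (XmlReader reader = sqlServerDB.ExecuteXmlReader(xmlCmd))
{
  while (!reader.EOF)
  {
    if (reader.IsStartElement())
    {
      sb.Append(reader.ReadOuterXml());
    }
    else
    {
      reader.Read();  // skip any non-element node
    }
  }
}
sb.Append("</Orders>");

// The combined string can now be loaded as a well-formed document.
XDocument doc = XDocument.Parse(sb.ToString());
```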

The example Return data as an XML fragment using a SQL Server XML query uses the code you see above to query a SQL Server database. It returns two XML elements in the default format for a FOR XML AUTO query, with the values of each column in the data set represented as attributes, as shown here.

<OrderList Id="1" Status="DRAFT" CreatedOn="2009-02-01T11:12:06" Name="Adjustable Race" LastName="Abbas" FirstName="Syed" ShipStreet="123 Elm Street" ShipCity="Denver" ShipZipCode="12345" ShippingOption="Two-day shipping" State="Colorado" />
<OrderList Id="2" Status="DRAFT" CreatedOn="2009-02-03T01:12:06" Name="All-Purpose Bike Stand" LastName="Abel" FirstName="Catherine" ShipStreet="321 Cedar Court" ShipCity="Denver" ShipZipCode="12345" ShippingOption="One-day shipping" State="Colorado" />

You might use this approach when you want to populate an XML document, transform the data for display, or persist it in some other form. You might use an XSLT style sheet to transform the data to the required format. For more information on XSLT, see "XSLT Transformations" on MSDN.

Retrieving Single Scalar Values

A common requirement when working with a database is to extract a single scalar value based on a query that selects either a single row or a single value. This is typically the case when using lookup tables or checking for the presence of a specific entity in the database. The Data Access block provides the ExecuteScalar method to handle this requirement. It executes the query you specify, and then returns the value of the first column of the first row of the result set as an Object type.

Markus says:
If you know that you only expect a single piece of data (one row, one column), then the ExecuteScalar method is the easiest way to retrieve it.

The ExecuteScalar method has a set of overloads similar to the ExecuteReader method we used earlier in this chapter. You can specify a CommandType (the default is StoredProcedure) and either a SQL statement or a stored procedure name. You can also pass in an array of Object instances that represent the parameters for the query. Alternatively, you can pass to the method a Command object that contains any parameters you require.

The following code demonstrates passing a Command object to the method to execute both an inline SQL statement and a stored procedure. It obtains a suitable Command instance from the current Database instance using the GetSqlStringCommand and GetStoredProcCommand methods. You can add parameters to the command before calling the ExecuteScalar method if required. However, to demonstrate the way the method works, the code here simply extracts the complete row set. The result is a single Object that you must cast to the appropriate type before displaying or consuming it in your code.

// Create a suitable command type for a SQL statement.
// NB: For efficiency, aim to return only a single value or a single row.
using (DbCommand sqlCmd 
       = defaultDB.GetSqlStringCommand("SELECT [Name] FROM States"))
{
    // Call the ExecuteScalar method of the command.
    Console.WriteLine("Result using a SQL statement: {0}",
                       defaultDB.ExecuteScalar(sqlCmd).ToString());
}

// Create a suitable command type for a stored procedure.
// NB: For efficiency, aim to return only a single value or a single row.
using (DbCommand sprocCmd = defaultDB.GetStoredProcCommand("GetStatesList"))
{
    // Call the ExecuteScalar method of the command.
    Console.WriteLine("Result using a stored procedure: {0}",
                       defaultDB.ExecuteScalar(sprocCmd).ToString());
}

You can see the code listed above running in the example Return a single scalar value from a SQL statement or stored procedure. The somewhat unexciting result it produces is shown here.

Result using a SQL statement: Alabama
Result using a stored procedure: Alabama

Retrieving Data Asynchronously

Having looked at all of the main ways you can extract data using the Data Access block, we'll move on to look at some more exciting scenarios (although many would perhaps fail to consider anything connected with data access exciting...). Databases are generally not renowned for being the fastest of components in an application—in fact many people will tell you that they are a major bottleneck in any enterprise application. It's not that they are inefficient, it's usually just that they contain many millions of rows, and the queries you need to execute are relatively complex. Of course, it may just be that the query is badly written and causes poor performance, but that's a different story.

One way that applications can minimize the performance hit from data access is to perform it asynchronously. This means that the application code can continue to execute, and the user interface can remain interactive during the process. Asynchronous data access may not suit every situation, but it can be extremely useful.

Carlos says:
Asynchronous data access can improve the user experience because the UI does not freeze while the application fetches the data. It is also very useful in server applications where you can avoid blocking threads that could handle other requests, thus improving utilization. However, handling asynchronous calls requires more complex code: you must handle notifications when the call completes, and handle errors and timeouts asynchronously. You can now use the Task Parallel Library and the async/await keywords to greatly simplify some of your asynchronous code.

For example, you might be able to perform multiple queries concurrently and combine the results to create the required data set. Or query multiple databases, and only use the data from the one that returned the results first (which is also a kind of failover feature). However, keep in mind that asynchronous data access has an effect on connection and data streaming performance over the wire. Don’t expect a query that returns ten rows to show any improvement using an asynchronous approach—it is more likely to take longer to return the results!

The Data Access block provides asynchronous Begin and End versions of many of the standard data access methods, including ExecuteReader, ExecuteScalar, ExecuteXmlReader, and ExecuteNonQuery. It also provides asynchronous Begin and End versions of the Execute method for accessors that return data as a sequence of objects. You will see both of these techniques here.

Preparing for Asynchronous Data Access

Asynchronous processing in the Data Access block is only available for SQL Server databases. The Database class includes a property named SupportsAsync that you can query to see if the current Database instance does, in fact, support asynchronous operations. The example for this chapter contains a simple check for this.

One other point to note is that asynchronous data access usually involves the use of a callback or continuation that may run on a different thread from the calling code. On a client application some of this code usually needs to run in a UI thread, which will require proper inter-thread communication.

Other points to note about asynchronous data access are the following:

  • You can use the standard .NET methods and classes from the System.Threading namespace, such as wait handles and manual reset events, to manage asynchronous execution of the Data Access block methods. You can also cancel a pending or executing command by calling the Cancel method of the command you used to initiate the operation. For more information, see "Asynchronous Operations" on MSDN.
  • A recommended alternative approach is to use Task objects from the Task Parallel Library to wrap asynchronous operations. This is a more modern approach that simplifies some of the asynchronous code you need to write. For more information, see Task Parallel Library on MSDN.
  • The BeginExecuteReader method does not accept a CommandBehavior parameter. By default, the method will automatically set the CommandBehavior property on the underlying reader to CloseConnection unless you specify a transaction when you call the method. If you do specify a transaction, it does not set the CommandBehavior property.
  • Always ensure you call the appropriate EndExecute method when you use asynchronous data access, even if you do not actually require access to the results, or call the Cancel method on the connection. Failing to do so can cause memory leaks and consume additional system resources.
  • Using asynchronous data access with the Multiple Active Results Set (MARS) feature of ADO.NET may produce unexpected behavior, and should generally be avoided.
Carlos says:
Asynchronous code is notoriously difficult to write, test, and debug for all edge cases, and you should only consider using it where it really can provide a performance benefit. For guidance on performance testing and setting performance goals see patterns & practices "Performance Testing Guidance for Web Applications." Using the Task Parallel Library and Task objects together with the async and await keywords makes it much easier to write asynchronous code.

The following sections illustrate two approaches to retrieving row set data asynchronously. The first uses the traditional Begin/End pattern directly; the second illustrates a more up-to-date approach using the Task Parallel Library.

Retrieving Row Set Data Asynchronously using BeginXXX and EndXXX Methods Directly

The following code shows how you can perform asynchronous data access to retrieve a row set from a SQL Server database. The code creates a Command instance and adds two parameters, and then calls the BeginExecuteReader method of the Database class to start the process. The code passes to this method a reference to the command to execute (with its parameters already added), a Lambda expression to execute when the data retrieval process completes, and a null value for the AsyncState parameter.

// Create command to execute stored procedure and add parameters.
DbCommand cmd = asyncDB.GetStoredProcCommand("ListOrdersSlowly");
asyncDB.AddInParameter(cmd, "state", DbType.String, "Colorado");
asyncDB.AddInParameter(cmd, "status", DbType.String, "DRAFT");

// Execute the query asynchronously specifying the command and the
// expression to execute when the data access process completes.
asyncDB.BeginExecuteReader(cmd,
  asyncResult =>
  {
    // Lambda expression executed when the data access completes.
    try
    {
      using (IDataReader reader = asyncDB.EndExecuteReader(asyncResult))
      {
        DisplayRowValues(reader);
      }
    }
    catch (Exception ex)
    {
      Console.WriteLine("Error after data access completed: {0}", ex.Message);
    }
  }, null);

The Lambda expression then calls the EndExecuteReader method to obtain the results of the query execution. At this point you can consume the row set in your application or, as the code above does, just display the values. Notice that the callback expression should handle any errors that may occur during the asynchronous operation.

Note

You can also, of course, use the separate callback method approach instead of an inline lambda expression if you wish.

The AsyncState parameter can be used to pass any required state information into the callback. For example, when you use a callback, you would pass a reference to the current Database instance as the AsyncState parameter so that the callback code can call the EndExecuteReader (or other appropriate End method) to obtain the results. When you use a Lambda expression, the current Database instance is available within the expression and, therefore, you do not need to populate the AsyncState parameter.
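If you prefer a separate callback method, a sketch of that approach (reusing the cmd object and the DisplayRowValues routine from the example above) might look like this:

```csharp
// Start the asynchronous operation, passing the Database instance as
// the AsyncState so that the callback can complete the call.
asyncDB.BeginExecuteReader(cmd, ReaderCallback, asyncDB);

// ...

static void ReaderCallback(IAsyncResult asyncResult)
{
  // Recover the Database instance from the AsyncState.
  var database = (Database)asyncResult.AsyncState;
  try
  {
    using (IDataReader reader = database.EndExecuteReader(asyncResult))
    {
      DisplayRowValues(reader);
    }
  }
  catch (Exception ex)
  {
    Console.WriteLine("Error after data access completed: {0}", ex.Message);
  }
}
```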

Carlos says:
In a Windows Forms or WPF application, you should not attempt to update the UI directly from within the Lambda expression. Because the Lambda expression runs on a different thread, you should use the Invoke method on a form, or the Dispatcher object in WPF.

The example Execute a command that retrieves data asynchronously uses the code shown above to fetch two rows from the database and display the contents. As well as the code above, it uses a simple routine that displays a "Waiting..." message every second as the code executes. The result is shown here.

Database supports asynchronous operations
Waiting... Waiting... Waiting... Waiting... Waiting...

Id = 1
Status = DRAFT
CreatedOn = 01/02/2009 11:12:06
Name = Adjustable Race
LastName = Abbas
FirstName = Syed
ShipStreet = 123 Elm Street
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = Two-day shipping
State = Colorado

Id = 2
Status = DRAFT
CreatedOn = 03/02/2009 01:12:06
Name = All-Purpose Bike Stand
LastName = Abel
FirstName = Catherine
ShipStreet = 321 Cedar Court
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = One-day shipping
State = Colorado

Of course, as we don't have a multi-million-row database handy to query, the example uses a stored procedure that contains a WAITFOR statement to simulate a long-running data access operation. It also uses ManualResetEvent objects to manage the threads so that you can see the results more clearly. Open the sample in Visual Studio, or view the Program.cs file, to see the way this is done.

Retrieving Row Set Data Asynchronously using a Task

In the current release of Enterprise Library, the asynchronous methods in the Database class do not natively support using Task objects. However, it is possible to use Task objects by using the TaskFactory.FromAsync method that converts a pair of BeginXXX and EndXXX methods to a Task implementation. The following code sample, which executes the same stored procedure as the previous example, illustrates how you can do this. Note how the example uses the FromAsync method to wrap the asynchronous operation in a Task instance, and uses the async and await keywords to simplify the synchronization.

static void ReadDataAsynchronouslyTask()
{
  if (!SupportsAsync(asyncDB)) return;
  DoReadDataAsynchronouslyTask().Wait();
}

private static async Task DoReadDataAsynchronouslyTask()
{
  try
  {
    // Create command to execute stored procedure and add parameters
    DbCommand cmd = asyncDB.GetStoredProcCommand("ListOrdersSlowly");
    asyncDB.AddInParameter(cmd, "state", DbType.String, "Colorado");
    asyncDB.AddInParameter(cmd, "status", DbType.String, "DRAFT");

    using (var timer = new Timer(_ => Console.Write("Waiting... ")))
    {
      timer.Change(0, 1000);

      using (var reader = await Task<IDataReader>.Factory
        .FromAsync<DbCommand>(asyncDB.BeginExecuteReader, 
        asyncDB.EndExecuteReader, cmd, null))
      {
        timer.Change(Timeout.Infinite, Timeout.Infinite);
        Console.WriteLine();
        Console.WriteLine();
        DisplayRowValues(reader);
      }
    }
  }
  catch (Exception ex)
  {
    Console.WriteLine("Error while starting data access: {0}", ex.Message);
  }
}

You can run this example in the sample application to see that it behaves in the same way as the previous sample.

Retrieving Data as Objects Asynchronously

You can also execute data accessors asynchronously when you want to return your data as a sequence of objects rather than as rows and columns. The example Execute a command that retrieves data as objects asynchronously demonstrates this technique. You can create your accessor and associated mappers in the same way as shown in the previous section of this chapter, and then call the BeginExecute method of the accessor. This works in much the same way as when using the BeginExecuteReader method described in the previous example.

You pass to the BeginExecute method the lambda expression or callback to execute when the asynchronous data access process completes, along with the AsyncState and an array of Object instances that represent the parameters to apply to the stored procedure or SQL statement you are executing. The lambda expression or callback method can obtain a reference to the accessor that was executed from the AsyncState (casting it to an instance of the DataAccessor base type so that the code will work with any accessor implementation), and then call the EndExecute method of the accessor to obtain a reference to the sequence of objects the accessor retrieved from the database.
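A sketch of this technique, reusing the Product class and the stored procedure from earlier in the chapter, might look like the following:

```csharp
// Create the accessor, then start the asynchronous execution, passing
// the accessor itself as the AsyncState so the callback can complete it.
var accessor = asyncDB.CreateSprocAccessor<Product>("GetProductList");

accessor.BeginExecute(
  asyncResult =>
  {
    // Recover the accessor from the AsyncState via the base type,
    // then obtain the sequence of objects it retrieved.
    var completedAccessor = (DataAccessor<Product>)asyncResult.AsyncState;
    foreach (var product in completedAccessor.EndExecute(asyncResult))
    {
      Console.WriteLine("Product Name: {0}", product.Name);
    }
  },
  accessor,   // the AsyncState
  "%bike%");  // parameter values for the stored procedure
```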

Updating Data

So far, we've looked at retrieving data from a database using the classes and methods of the Data Access block. Of course, while this is typically the major focus of many applications, you will often need to update data in your database. The Data Access block provides features that support data updates. You can execute update queries (such as INSERT, DELETE, and UPDATE statements) directly against a database using the ExecuteNonQuery method. In addition, you can use the ExecuteDataSet, LoadDataSet, and UpdateDataSet methods to populate a DataSet and push changes to the rows back into the database. We'll look at both of these approaches here.

Executing an Update Query

The Data Access block makes it easy to execute update queries against a database. By update queries, we mean inline SQL statements, or SQL statements within stored procedures, that use the UPDATE, DELETE, or INSERT keywords. You can execute these kinds of queries using the ExecuteNonQuery method of the Database class.

Like the ExecuteReader method we used earlier in this chapter, the ExecuteNonQuery method has a broad set of overloads. You can specify a CommandType (the default is StoredProcedure) and either a SQL statement or a stored procedure name. You can also pass in an array of Object instances that represent the parameters for the query. Alternatively, you can pass to the method a Command object that contains any parameters you require. There are also Begin and End versions that allow you to execute update queries asynchronously.

The following code from the example application for this chapter shows how you can use the ExecuteNonQuery method to update a row in a table in the database. It updates the Description column of a single row in the Products table, checks that the update succeeded, and then updates it again to return it to the original value (so that you can run the example again). The first step is to create the command and add the required parameters, as you've seen in earlier examples, and then call the ExecuteNonQuery method with the command as the single parameter. Next, the code changes the value of the command parameter named description to the original value in the database, and then executes the compensating update.

string oldDescription 
    = "Carries 4 bikes securely; steel construction, fits 2\" receiver hitch.";
string newDescription = "Bikes tend to fall off after a few miles.";

// Create command to execute the stored procedure and add the parameters.
DbCommand cmd = defaultDB.GetStoredProcCommand("UpdateProductsTable");
defaultDB.AddInParameter(cmd, "productID", DbType.Int32, 84);
defaultDB.AddInParameter(cmd, "description", DbType.String, newDescription);

// Execute the query and check if one row was updated.
if (defaultDB.ExecuteNonQuery(cmd) == 1)
{
  // Update succeeded.
}
else
{
    Console.WriteLine("ERROR: Could not update just one row.");
}

// Change the value of the second parameter
defaultDB.SetParameterValue(cmd, "description", oldDescription);

// Execute query and check if one row was updated
if (defaultDB.ExecuteNonQuery(cmd) == 1)
{
  // Update succeeded.
}
else
{
    Console.WriteLine("ERROR: Could not update just one row.");
}

Notice the pattern used to execute the query and check that it succeeded. The ExecuteNonQuery method returns an integer value that is the number of rows updated (or, to use the more accurate term, affected) by the query. In this example, we are specifying a single row as the target for the update by selecting on the unique ID column. Therefore, we expect only one row to be updated—any other value means there was a problem.

If you are expecting to update multiple rows, you would check for a non-zero returned value. Typically, if you need to ensure integrity in the database, you could perform the update within a connection-based transaction, and roll it back if the result was not what you expected. We look at how you can use transactions with the Data Access block methods in the section "Working with Connection-Based Transactions" later in this chapter.
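As a brief preview of that technique, the following sketch wraps the update command from the example above in a connection-based transaction and rolls it back if the affected-row count is not the expected value:

```csharp
// Open a connection and a transaction on it, then execute the update
// command within that transaction.
using (DbConnection connection = defaultDB.CreateConnection())
{
  connection.Open();
  using (DbTransaction transaction = connection.BeginTransaction())
  {
    if (defaultDB.ExecuteNonQuery(cmd, transaction) == 1)
    {
      // Exactly one row affected, so make the change permanent.
      transaction.Commit();
    }
    else
    {
      // Unexpected row count; undo the change.
      transaction.Rollback();
    }
  }
}
```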

The example Update data using a Command object, which uses the code you see above, produces the following output.

Contents of row before update:
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

Contents of row after first update:
Id = 84
Name = Hitch Rack - 4-Bike
Description = Bikes tend to fall off after a few miles.

Contents of row after second update:
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

Working with DataSets

If you need to retrieve data and store it in a way that allows you to push changes back into the database, you will usually use a DataSet. The Data Access block supports simple operations on a normal (non-typed) DataSet, including the capability to fill a DataSet and then update the original database table from the DataSet.

To fill a DataSet, you use the ExecuteDataSet method, which returns a new instance of the DataSet class populated with a table containing the data for each row set returned by the query (which may be a multiple-statement batch query). The tables in this DataSet will have default names such as Table, Table1, and Table2.

If you want to load data into an existing DataSet, you use the LoadDataSet method. This allows you to specify the name(s) of the target table(s) in the DataSet, and lets you add additional tables to an existing DataSet or refresh the contents of specific tables in the DataSet.

Both of these methods, ExecuteDataSet and LoadDataSet, have a similar broad set of overloads to the ExecuteReader and other methods you've seen earlier in this chapter. You can specify a CommandType (the default is StoredProcedure) and either a SQL statement or a stored procedure name. You can also pass the values that represent the input parameters for the query. Alternatively, you can pass to the method a Command object that contains any parameters you require.

For example, the following code lines show how you can use the ExecuteDataSet method with a SQL statement; with a stored procedure and a parameter array; and with a command pre-populated with parameters. The code assumes you have created the Data Access block Database instance named db.

DataSet productDataSet;

// Using a SQL statement.
string sql = "SELECT CustomerName, CustomerPhone FROM Customers";
productDataSet = db.ExecuteDataSet(CommandType.Text, sql);

// Using a stored procedure and a parameter array.
productDataSet = db.ExecuteDataSet("GetProductsByCategory", "%bike%");

// Using a stored procedure and a named parameter.
DbCommand cmd = db.GetStoredProcCommand("GetProductsByCategory");
db.AddInParameter(cmd, "CategoryID", DbType.Int32, 7);
productDataSet = db.ExecuteDataSet(cmd);

Updating the Database from a DataSet

To update data in a database from a DataSet, you use the UpdateDataSet method, which returns a total count of the number of rows affected by the update, delete, and insert operations. The overloads of this method allow you to specify the source DataSet containing the updated rows, the name of the table in the database to update, and references to the three Command instances that the method will execute to perform UPDATE, DELETE, and INSERT operations on the specified database table.

In addition, you can specify a value for the UpdateBehavior, which determines how the method will apply the updates to the target table rows. You can specify one of the following values for this parameter:

  • Standard. If the underlying ADO.NET update process encounters an error, the update stops and no subsequent updates are applied to the target table.
  • Continue. If the underlying ADO.NET update process encounters an error, the update will continue and attempt to apply any subsequent updates.
  • Transactional. If the underlying ADO.NET update process encounters an error, all the updates made to all rows will be rolled back.

Finally, you can—if you wish—provide a value for the UpdateBatchSize parameter of the UpdateDataSet method. This forces the method to attempt to perform updates in batches instead of sending each one to the database individually. This is more efficient, but the return value for the method will show only the number of updates made in the final batch, and not the total number for all batches. Typically, you are likely to use a batch size value between 10 and 100. You should experiment to find the most appropriate batch size; it depends on the type of database you are using, the query you are executing, and the number of parameters for the query.
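For illustration only (the stored procedure names, parameter names, and source columns below are assumptions, and productsDataSet is a hypothetical DataSet containing a table named Products), a call to UpdateDataSet with the three commands and the Standard behavior might look like this:

```csharp
// Commands the method will use for each kind of change. The overload of
// AddInParameter used here binds each parameter to a source column in the
// DataSet table, using the appropriate row version.
DbCommand insertCmd = defaultDB.GetStoredProcCommand("AddProduct");
defaultDB.AddInParameter(insertCmd, "name", DbType.String,
                         "Name", DataRowVersion.Current);
defaultDB.AddInParameter(insertCmd, "description", DbType.String,
                         "Description", DataRowVersion.Current);

DbCommand updateCmd = defaultDB.GetStoredProcCommand("UpdateProduct");
defaultDB.AddInParameter(updateCmd, "id", DbType.Int32,
                         "Id", DataRowVersion.Original);
defaultDB.AddInParameter(updateCmd, "description", DbType.String,
                         "Description", DataRowVersion.Current);

DbCommand deleteCmd = defaultDB.GetStoredProcCommand("DeleteProduct");
defaultDB.AddInParameter(deleteCmd, "id", DbType.Int32,
                         "Id", DataRowVersion.Original);

// Push all pending changes in the Products table back to the database.
int rowsAffected = defaultDB.UpdateDataSet(productsDataSet, "Products",
    insertCmd, updateCmd, deleteCmd, UpdateBehavior.Standard);
```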

The examples for this chapter include one named Fill a DataSet and update the source data, which demonstrates the ExecuteDataSet and UpdateDataSet methods. It uses the simple overloads of the ExecuteDataSet and LoadDataSet methods to fill two DataSet instances, using a separate routine named DisplayTableNames (not shown here) to display the table names and a count of the number of rows in these tables. This shows one of the differences between these two methods. Note that the LoadDataSet method requires a reference to an existing DataSet instance, and an array containing the names of the tables to populate.

Beth says: You can also use the LoadDataSet method to load data into a typed dataset.

string selectSQL = "SELECT Id, Name, Description FROM Products WHERE Id > 90";

// Fill a DataSet from the Products table using the simple approach.
DataSet simpleDS = defaultDB.ExecuteDataSet(CommandType.Text, selectSQL);
DisplayTableNames(simpleDS, "ExecuteDataSet");

// Fill a DataSet from the Products table using the LoadDataSet method.
// This allows you to specify the name(s) for the table(s) in the DataSet.
DataSet loadedDS = new DataSet("ProductsDataSet");
defaultDB.LoadDataSet(CommandType.Text, selectSQL, loadedDS, 
                      new string[] { "Products" });
DisplayTableNames(loadedDS, "LoadDataSet");

This produces the following result.

Tables in the DataSet obtained using the ExecuteDataSet method:
 - Table named 'Table' contains 6 rows.

Tables in the DataSet obtained using the LoadDataSet method:
 - Table named 'Products' contains 6 rows.

The example then accesses the rows in the DataSet to delete a row, add a new row, and change the Description column in another row. After this, it displays the updated contents of the DataSet table.

// get a reference to the Products table in the DataSet.
DataTable dt = loadedDS.Tables["Products"];

// Delete a row in the DataSet table.
dt.Rows[0].Delete();

// Add a new row to the DataSet table.
object[] rowData = new object[] { -1, "A New Row", "Added to the table at " 
                                  + DateTime.Now.ToShortTimeString() };
dt.Rows.Add(rowData);

// Update the description of a row in the DataSet table.
rowData = dt.Rows[1].ItemArray;
rowData[2] = "A new description at " + DateTime.Now.ToShortTimeString();
dt.Rows[1].ItemArray = rowData;

// Display the contents of the DataSet.
DisplayRowValues(dt);

This produces the following output. To make it easier to see the changes, we've omitted the unchanged rows from the listing. Of course, the deleted row does not show in the listing, and the new row has the default ID of -1 that we specified in the code above.

Rows in the table named 'Products':

Id = 91
Name = HL Mountain Frame - Black, 44
Description = A new description at 14:25

...

Id = -1
Name = A New Row
Description = Added to the table at 14:25

The next stage is to create the commands that the UpdateDataSet method will use to update the target table in the database. The code declares three suitable SQL statements, and then builds the commands and adds the requisite parameters to them. Note that each parameter may be applied to multiple rows in the target table, so the actual value must be dynamically set based on the contents of the DataSet row whose updates are currently being applied to the target table.

This means that you must specify, in addition to the parameter name and data type, the name of the source column and the version (Current or Original) of the DataSet row to take the value from. For an INSERT command, you need the current version of the row that contains the new values. For a DELETE command, you need the original value of the ID to locate the row in the table that will be deleted. For an UPDATE command, you need the original value of the ID to locate the row in the table that will be updated, and the current version of the values with which to update the remaining columns in the target table row.

string addSQL = "INSERT INTO Products (Name, Description) "
              + "VALUES (@name, @description)";
string updateSQL = "UPDATE Products SET Name = @name, "
                 + "Description = @description WHERE Id = @id";
string deleteSQL = "DELETE FROM Products WHERE Id = @id";

// Create the commands to update the original table in the database
DbCommand insertCommand = defaultDB.GetSqlStringCommand(addSQL);
defaultDB.AddInParameter(insertCommand, "name", DbType.String, "Name",
                         DataRowVersion.Current);
defaultDB.AddInParameter(insertCommand, "description", DbType.String, 
                         "Description", DataRowVersion.Current);

DbCommand updateCommand = defaultDB.GetSqlStringCommand(updateSQL);
defaultDB.AddInParameter(updateCommand, "name", DbType.String, "Name",
                         DataRowVersion.Current);
defaultDB.AddInParameter(updateCommand, "description", DbType.String, 
                         "Description", DataRowVersion.Current);
defaultDB.AddInParameter(updateCommand, "id", DbType.Int32, "Id",
                         DataRowVersion.Original);

DbCommand deleteCommand = defaultDB.GetSqlStringCommand(deleteSQL);
defaultDB.AddInParameter(deleteCommand, "id", DbType.Int32, "Id",
                         DataRowVersion.Original);

Finally, you can apply the changes by calling the UpdateDataSet method, as shown here.

// Apply the updates in the DataSet to the original table in the database.
int rowsAffected = defaultDB.UpdateDataSet(loadedDS, "Products",
                   insertCommand, updateCommand, deleteCommand,
                   UpdateBehavior.Standard);
Console.WriteLine("Updated a total of {0} rows in the database.", rowsAffected);

The code captures and displays the number of rows affected by the updates. As expected, this is three, as shown in the final section of the output from the example.

Updated a total of 3 rows in the database.

Managing Connections

For many years, developers have fretted about the ideal way to manage connections in data access code. Connections are scarce, expensive in terms of resource usage, and can cause a big performance hit if not managed correctly. You must obviously open a connection before you can access data, and you should make sure it is closed after you have finished with it. However, if a new connection really were created, opened, and then closed and destroyed for every operation, execution in your applications would flow like molasses.

Instead, ADO.NET holds a pool of open connections that it hands out to applications that require them. Data access code must still go through the motions of calling the methods to create, open, and close connections, but ADO.NET automatically retrieves connections from the connection pool when possible, and decides when and whether to actually close the underlying connection and dispose it. The main issues arise when you have to decide when and how your code should call the Close method. The Data Access block helps to resolve these issues by automatically managing connections as far as is reasonably possible.

Markus says: The Data Access Application Block helps you to manage your database connections automatically.

When you use the Data Access block to retrieve a DataSet, the ExecuteDataSet method automatically opens and closes the connection to the database. If an error occurs, it will ensure that the connection is closed. If you want to keep a connection open, perhaps to perform multiple operations over that connection, you can access the Connection property of your DbCommand object and open the connection before calling the ExecuteDataSet method. The ExecuteDataSet method will leave the connection open when it completes, so you must ensure that your code closes it afterwards.

In contrast, when you retrieve a DataReader or an XmlReader, the ExecuteReader method (or, in the case of the XmlReader, the ExecuteXmlReader method) must leave the connection open so that you can read the data. The ExecuteReader method sets the CommandBehavior property of the reader to CloseConnection so that the connection is closed when you dispose the reader. Commonly, you will use a using construct to ensure that the reader is disposed, as shown here:

using (IDataReader reader = db.ExecuteReader(cmd))
{
  // use the reader here
}

Note

This code, and code later in this section, assumes you have created the Data Access block Database instance named db and a DbCommand instance named cmd.

Typically, when you use the ExecuteXmlReader method, you will explicitly close the connection after you dispose the reader. This is because the underlying XmlReader class does not expose a CommandBehavior property. However, you should still use the same approach as with a DataReader (a using statement) to ensure that the XmlReader is correctly closed and disposed.

using (XmlReader reader = db.ExecuteXmlReader(cmd))
{
  // use the reader here
}
// The XmlReader cannot close the connection for you,
// so close it explicitly after disposing the reader.
cmd.Connection.Close();

Finally, if you want to be able to access the connection your code is using, perhaps to create connection-based transactions in your code, you can use the Data Access block methods to explicitly create a connection for your data access methods to use. This means that you must manage the connection yourself, usually through a using statement as shown below, which automatically closes and disposes the connection:

using (DbConnection conn = db.CreateConnection())
{
  conn.Open();
  try
  {
    // perform data access here
  }
  catch
  {
    // handle any errors here
  }
}

Working with Connection-Based Transactions

A common requirement in many applications is to perform multiple updates to data in such a way that they all succeed, or can all be undone (rolled back) to leave the databases in a valid state that is consistent with the original content. The traditional example is when your bank carries out a monetary transaction that requires them to subtract a payment from one account and add the same amount to another account (or perhaps slightly less, with the commission going into their own account).

Transactions should follow the four ACID principles. These are Atomicity (all of the tasks of a transaction are performed or none of them are), Consistency (the database remains in a consistent state before and after the transaction), Isolation (other operations cannot access or see the data in an intermediate state during a transaction), and Durability (the results of a successful transaction are persisted and will survive system failure).

You can execute transactions when all of the updates occur in a single database by using the features of your database system (by including the relevant commands such as BEGIN TRANSACTION and ROLLBACK TRANSACTION in your stored procedures). ADO.NET also provides features that allow you to perform connection-based transactions over a single connection. This allows you to perform multiple actions on different tables in the same database, and manage the commit or rollback in your data access code.

All of the methods of the Data Access block that retrieve or update data have overloads that accept a reference to an existing transaction as a DbTransaction type. As an example of their use, the following code explicitly creates a transaction over a connection. It assumes you have created the Data Access block Database instance named db and two DbCommand instances named cmdA and cmdB.

using (DbConnection conn = db.CreateConnection())
{
  conn.Open();
  DbTransaction trans = conn.BeginTransaction();

  try
  {
    // execute commands, passing in the current transaction to each one
    db.ExecuteNonQuery(cmdA, trans);
    db.ExecuteNonQuery(cmdB, trans);
    trans.Commit();    // commit the transaction
  }
  catch
  {
    trans.Rollback();  // rollback the transaction
  }
}

Carlos says: The DbTransaction class enables you to write code that requires transactions and is provider independent.

The examples for this chapter include one named Use a connection-based transaction, which demonstrates the approach shown above. It starts by displaying the values of two rows in the Products table, and then uses the ExecuteNonQuery method twice to update the Description column of two rows in the database within the context of a connection-based transaction. As it does so, it displays the new description for these rows. Finally, it rolls back the transaction, which restores the original values, and then displays these values to prove that it worked.

Contents of rows before update:

Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable closure.

Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

-------------------------------------------------------------------------------
Updated row with ID = 53 to 'Third and little fingers tend to get cold.'.
Updated row with ID = 84 to 'Bikes tend to fall off after a few miles.'.
-------------------------------------------------------------------------------

Contents of row after rolling back transaction:

Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable closure.

Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

Working with Distributed Transactions

If you need to access different databases as part of the same transaction (including databases on separate servers), or if you need to include other data sources such as Microsoft Message Queuing (MSMQ) in your transaction, you must use a distributed transaction coordinator (DTC) mechanism such as Windows Component Services. In this case, you just perform the usual data access actions, and configure your components to use the DTC. Commonly, this is done through attributes added to the classes that perform the data access.

However, ADO.NET supports the concept of automatic transactions through the TransactionScope class. You can specify that a series of actions require transactional support, but ADO.NET will not generate an expensive distributed transaction until you actually open more than one connection within the transaction scope. This means that you can perform multiple transacted updates to different tables in the same database over a single connection. As soon as you open a new connection, ADO.NET automatically creates a distributed transaction (using Windows Component Services), and enrolls the original connections and all new connections created within the transaction scope into that distributed transaction. You then call methods on the transaction scope to either commit all updates, or to roll back (undo) all of them.

Jana says: Distributed transactions typically consume significant system resources. The TransactionScope class ensures that the application only uses a full distributed transaction if necessary.

Therefore, once you create the transaction scope, you use the Data Access block methods in exactly the same way as you would outside of a transaction. You do not need to pass the transaction scope to the methods as you would when using ADO.NET methods directly. For example, the methods of the Data Access Application Block automatically detect if they are being executed within the scope of a transaction. If they are, they enlist in the transaction scope and reuse the existing connection (because opening a new one might force Component Services to start a distributed transaction), and do not close the connection when they complete. The Data Access block will close and dispose the connection when the transaction is completed.
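This behavior can be sketched as follows. The fragment is illustrative, assuming a Database instance named db and two DbCommand instances named cmdA and cmdB as in the earlier examples:

```csharp
using (TransactionScope scope = new TransactionScope())
{
  // The block detects the ambient transaction and reuses a single
  // connection for both commands, so this remains a lightweight
  // (non-distributed) transaction.
  db.ExecuteNonQuery(cmdA);
  db.ExecuteNonQuery(cmdB);

  // Opening an additional connection inside the scope causes ADO.NET to
  // promote the transaction to a full distributed (DTC) transaction and
  // enlist both connections in it.
  using (DbConnection extra = db.CreateConnection())
  {
    extra.Open();
  }

  scope.Complete();  // commit; omitting this call rolls everything back
}
```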

Typically, you will use the TransactionScope class in the following way:

using (TransactionScope scope 
        = new TransactionScope(TransactionScopeOption.RequiresNew))
{
  // perform data access here

  scope.Complete();  // commit the transaction; if Complete is not called,
                     // all changes roll back when the scope is disposed
}

For more details about using a DTC and transaction scope, see "Distributed Transactions (ADO.NET)" and "System.Transactions Integration with SQL Server (ADO.NET)" on MSDN.

The examples for this chapter contain one named Use a TransactionScope for a distributed transaction, which demonstrates the use of a TransactionScope with the Data Access block. It performs the same updates to the Products table in the database as you saw in the previous example of using a connection-based transaction. However, there are subtle differences in the way this example works.

In addition, as it uses the Microsoft Distributed Transaction Coordinator (DTC) service, you must ensure that this service is running before you execute the example; depending on your operating system it may not be set to start automatically. To start the service, open the Services MMC snap-in from your Administrative Tools menu, right-click on the Distributed Transaction Coordinator service, and click Start. To see the effects of the TransactionScope and the way that it promotes a transaction, open the Component Services MMC snap-in from your Administrative Tools menu and expand the Component Services node until you can see the Transaction List in the central pane of the snap-in.

When you execute the example, it creates a new TransactionScope and executes the ExecuteNonQuery method twice to update two rows in the database table. At this point, the code stops until you press a key. This gives you the opportunity to confirm that there is no distributed transaction—as you can see if you look in the transaction list in the Component Services MMC snap-in.

After you press a key, the application creates a new connection to the database (when we used a connection-based transaction in the previous example, we just updated the parameter values and executed the same commands over the same connection). This new connection, which is within the scope of the existing TransactionScope instance, causes the DTC to start a new distributed transaction and enroll the existing lightweight transaction into it, as shown in Figure 2.

Figure 2 - Viewing DTC transactions


The code then waits until you press a key again, at which point it exits from the using clause that created the TransactionScope, and the transaction is no longer in scope. As the code did not call the Complete method of the TransactionScope to preserve the changes in the database, they are rolled back automatically. To prove that this is the case, the code displays the values of the rows in the database again. This is the complete output from the example.

Contents of rows before update:

Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable closure.

Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

-------------------------------------------------------------------------------
Updated row with ID = 53 to 'Third and little fingers tend to get cold.'.
No distributed transaction. Press any key to continue...

Updated row with ID = 84 to 'Bikes tend to fall off after a few miles.'.
New distributed transaction created. Press any key to continue...
-------------------------------------------------------------------------------

Contents of row after disposing TransactionScope:

Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable closure.

Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver hitch.

This default behavior of the TransactionScope ensures that an error or problem that stops the code from completing the transaction will automatically roll back changes. If your code does not seem to be updating the database, make sure you remembered to call the Complete method!

Extending the Block to Use Other Databases

The Data Access block contains providers for SQL Server, Oracle, and SQL Server Compact Edition. However, you can extend the block to use other databases if you wish. Writing a new provider is not a trivial task, and you may find that there is already a third party provider available for your database. For example, at the time of writing, the Enterprise Library Community Contribution site listed providers for MySql and SQLite databases. For more information, visit the EntLib Contrib Project site.

If you decide to create a new provider, you can create a new class derived from the Enterprise Library Database class and override its methods to implement the appropriate functionality. One limiting factor is that there must be an ADO.NET provider available for your database. The Database class in Enterprise Library relies on this to perform data access operations.

You must also be aware of the differences between database functionality, and manage these differences in your code. For example, you must handle return values, parameter prefixes (such as "@"), data type conversions, and other relevant factors. However, you can add additional methods to your provider to take advantage of features of your target database that are not available for other database types. For example, the SQL Server provider in the Data Access block exposes a method that uses the SQLXML functionality in SQL Server to extract data in XML format.
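As a rough outline of the shape such a provider might take (the provider invariant name and the overridden member shown here are illustrative only; consult the Enterprise Library Reference Documentation for the full set of members a custom Database subclass must implement):

```csharp
public class MyDatabase : Database
{
  // "My.Data.Provider" is a hypothetical ADO.NET provider invariant name;
  // a real provider must be registered with DbProviderFactories.
  public MyDatabase(string connectionString)
    : base(connectionString, DbProviderFactories.GetFactory("My.Data.Provider"))
  {
  }

  // The Database base class leaves provider-specific operations, such as
  // stored procedure parameter discovery, to the derived class.
  protected override void DeriveParameters(DbCommand discoveryCommand)
  {
    throw new NotSupportedException(
      "Parameter discovery is not supported by this provider.");
  }
}
```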

For more information on creating additional database providers for the Data Access Application Block, see the Enterprise Library Reference Documentation.

Summary

This chapter discussed the Data Access Application Block; one of the most commonly used blocks in Enterprise Library. The Data Access block provides two key advantages for developers and administrators. Firstly, it abstracts the database so that developers and administrators can switch the application from one type of database to another with only changes to the configuration files required. Secondly, it helps developers by making it easier to write the most commonly used sections of data access code with less effort, and it hides some of the complexity of working directly with ADO.NET.

In terms of abstracting the database, the block allows developers to write code in such a way that (for most functions) they do not need to worry which database (such as SQL Server, SQL Server CE, or Oracle) their applications will use. They write the same code for all of them, and configure the application to specify the actual database at run time. This means that administrators and operations staff can change the targeted database without requiring changes to the code, recompilation, retesting, and redeployment.

In terms of simplifying data access code, the block provides a small number of methods that encompass most data access requirements, such as retrieving a DataSet, a DataReader, a scalar (single) value, one or more values as output parameters, or a series of XML elements. It also provides methods for updating a database from a DataSet, and integrates with the ADO.NET TransactionScope class to allow a range of options for working with transactions. However, the block does not limit your options to use more advanced ADO.NET techniques, as it allows you to access the underlying objects such as the connection and the DataAdapter.

The chapter also described general issues such as managing connections and integration with transactions, and explored the actual capabilities of the block in more depth. Finally, we looked briefly at how you can use the block with other databases, including those supported by third-party providers.

More Information

All links in this book are accessible from the book's online bibliography on MSDN at https://aka.ms/el6biblio.

For more information on a variety of topics, see the Enterprise Library Contrib site.

The book's bibliography also includes resources on using Data Transfer Objects (DTOs) to pass data around your application layers, on implementing a data access layer using O/RM techniques, and on taking advantage of client-side data querying techniques such as LINQ.

For more information on using LINQ to query sequences, see "LINQ Query Expressions" on MSDN.

For a description of the capabilities of SQLXML, see SQLXML on MSDN.

For more information about using an XmlReader, see "Reading XML with the XmlReader" on MSDN.

For more information on XSLT, see "XSLT Transformations" on MSDN.

You can now use the Task Parallel Library and the async/await keywords to greatly simplify some of your asynchronous code.

For more information about asynchronous data access, see "Asynchronous Operations" on MSDN.

For guidance on performance testing and setting performance goals see patterns & practices "Performance Testing Guidance for Web Applications."

For more details about using a DTC and transaction scope, see "Distributed Transactions (ADO.NET)" and "System.Transactions Integration with SQL Server (ADO.NET)" on MSDN.
