Introducing Multiple Active Result Sets (MARS) (1)

MARS is a new programming interface introduced in SQL Server 2005. It allows multiple result sets to be active at the same time on a single connection. To put it simply, you can open a connection to the server, submit multiple requests on it at the same time, and then read the results from those requests in whatever order you want, and you don’t get the “connection busy” error any more!
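In ADO.NET, for example, you opt a connection into MARS through its connection string. Here is a minimal sketch, assuming the SqlClient provider; the server and database names are placeholders:

            // namespace: System.Data.SqlClient
            // MultipleActiveResultSets=True opts this connection into MARS.
            // Server and database names below are placeholders.
            SqlConnection conn1 = new SqlConnection(
                "Server=myServer;Database=DB1;Integrated Security=true;MultipleActiveResultSets=True");
            conn1.Open();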

 

Sounds wonderful, right? But wait! Before you dive into MARS, there are a lot of things you need to be aware of. I will give a series of talks about how to use MARS correctly. This first talk gives you three basic things to keep in mind, and there will be more later on.

 

First, you need to understand what a unit of work is from the server’s and from the client’s point of view.

 

From the client application’s point of view, you send a ‘command’ to the server, call Execute() on the ‘command’, and then retrieve the result(s) from it. The result can be a data set or a scalar value, possibly along with error/warning messages. So the unit of work, from the client’s point of view, is a command, or a request.

 

But once the command is sent to the server, the server treats it differently. SQL Server sees the request as a batch of statements, and it processes these statements one by one, without paying much attention to the fact that they were sent in one batch. The server also decides whether or not MARS interleaving is allowed by looking at the actual statements running in the requests, not at batch granularity. Please keep this in mind, as it will help you understand MARS better later on.
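To make the distinction concrete, here is a hedged sketch (the table and column names are made up): from the client’s point of view this is one command, one request; the server sees it as a batch of three statements and processes them one by one.

            // One client-side request whose text is a batch of three statements.
            SqlCommand cmd = new SqlCommand(
                "UPDATE t1 SET col1 = 1; " +   // statement 1
                "UPDATE t2 SET col1 = 2; " +   // statement 2
                "SELECT * FROM t1;",           // statement 3
                conn1);
            SqlDataReader reader = cmd.ExecuteReader();   // one Execute() call, one batch on the server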

 

Second, you need to understand the difference between sending multiple requests to the server inside one connection/session and sending multiple requests to the server through different connections/sessions.

 

If you send multiple requests to the server through one connection, then these requests can share connection-wide resources such as the database context, security context, transaction, SET options, cursors, local temporary tables, etc. But multiple requests sent through different connections cannot share these connection-specific resources.
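For instance, here is a hypothetical sketch (the #tmp table and the second connection conn2 are made up): a local temporary table created on conn1 is visible to another command on conn1, but not to a command on a different connection.

            // Both commands run on conn1, so they share the session's local temp table.
            new SqlCommand("CREATE TABLE #tmp (id int); INSERT #tmp VALUES (1);", conn1).ExecuteNonQuery();
            object cnt = new SqlCommand("SELECT COUNT(*) FROM #tmp", conn1).ExecuteScalar();   // works

            // A command on a different connection conn2 cannot see #tmp:
            // new SqlCommand("SELECT COUNT(*) FROM #tmp", conn2).ExecuteScalar();   // fails: invalid object name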

 

Why do you need to care about these shared resources? Well, it is always good to share, but you need to make sure you are not messing up these shared resources by changing the shared state randomly in your requests. For example, suppose you have a table t1 defined in two databases, DB1 and DB2, and the connection is currently in DB1. Then you send two requests to the server:

            Request 1:
                        UPDATE t1 SET col1 = 2
                        USE DB2
                        UPDATE t1 SET col2 = '3435'

            Request 2:
                        UPDATE t1 SET col2 = '3333'
                        INSERT t1 VALUES (10, '5555')

Since you are changing the database context in the middle of one request with a USE <DBName> statement, it is clear that you want the statements following that USE to run in the new database. But which database do you expect your second request to run in, DB1 or DB2?
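Here is a rough sketch of how those two requests could end up active on the same MARS connection at once (it assumes conn1 has MARS enabled, and older versions of ADO.NET may also require Asynchronous Processing=true in the connection string for the Begin/End calls):

            SqlCommand req1 = new SqlCommand(
                "UPDATE t1 SET col1 = 2; USE DB2; UPDATE t1 SET col2 = '3435';", conn1);
            SqlCommand req2 = new SqlCommand(
                "UPDATE t1 SET col2 = '3333'; INSERT t1 VALUES (10, '5555');", conn1);

            // Submit both batches so they are active on the connection at the same time.
            IAsyncResult r1 = req1.BeginExecuteNonQuery();
            IAsyncResult r2 = req2.BeginExecuteNonQuery();
            req1.EndExecuteNonQuery(r1);
            req2.EndExecuteNonQuery(r2);
            // Whether req2's statements ran against DB1.dbo.t1 or DB2.dbo.t1 depends on how
            // the two batches interleaved around the USE DB2 in req1.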

 

I will talk about how to make sure you are not causing non-deterministic behavior in a later talk. This is just to get you thinking about it.

 

The third thing you need to understand is that multiple active requests are not actually running in parallel; they run in an interleaved way. From the client application’s point of view, the server supports running multiple requests in parallel, and the client can pull results from multiple requests at will. But on the server, out of the multiple requests sent on a connection, only one request can actually make progress at a given time; the other requests have to wait for that request to give up the connection resource.

 

So when does a request give up the connection resource? When the request is writing results to the network and is waiting for the client to pull them from the server. Take a simple example: you send two SELECT requests to the server in two SqlCommand objects that share the same connection,

            // create two commands that are in the same connection conn1
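            // (assumes conn1 was opened with MARS enabled in its connection string,
            //  e.g. MultipleActiveResultSets=True; otherwise the second ExecuteReader below fails)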

            SqlCommand cmd1 = new SqlCommand("SELECT * FROM DB1.dbo.t1", conn1);
            SqlCommand cmd2 = new SqlCommand("SELECT * FROM DB2.dbo.t1", conn1);

            // Send the two requests to the server
            SqlDataReader reader1 = cmd1.ExecuteReader();
            SqlDataReader reader2 = cmd2.ExecuteReader();

            // Read all results from the first command
            while (reader1.Read())
                Console.WriteLine(reader1.GetString(1));

            // Read all results from the second command
            while (reader2.Read())
                Console.WriteLine(reader2.GetString(1));

 

After you call cmd1.ExecuteReader(), the server starts running the command SELECT * FROM DB1.dbo.t1 and sends the results to the client. Since the client is not reading the results yet, the server thread blocks while writing packets, so it gives up the connection resource to let other requests on the same connection run. Now cmd2.ExecuteReader() picks up the connection resource, runs its SELECT, and sends results to the client. Since the client is now trying to read results from the first command, the server thread that processes the second SELECT is in turn blocked, so it gives up the connection resource to let the first SELECT continue to run.

Now the client wants to read all the results from the first command, so the server keeps sending them until it reaches the end. When the first request finishes, it again gives up the connection resource to let the second request run to the end.

 

There are a lot of details in the yielding/resuming logic, but that is too much for this talk. We will cover this in the next talk.