BizTalk SQL Adapter Advice Requested

So I may be demonstrating my SQL Server ignorance, but I need some advice from you all.

Let's say I have the following scenario:

I have the SQL adapter running on two BizTalk Servers, each executing a stored procedure every 2 seconds. That procedure needs to return the TOP 50 rows from a defined table. The problem two of my customers have bumped into, and one I don't feel I've given a rock-star answer to, centers on making sure that one SQL adapter doesn't pull the same data that the other adapter has already pulled. Of course, you change a flag after you've read the data to make sure it doesn't get polled again, but say the adapters fall into a cycle where one polls less than a second after the other. If the first adapter hasn't switched the flag yet, the second adapter may pull the same data again.
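To make the race concrete, here's the naive two-step pattern (the table and column names are hypothetical) where the read and the flag change are separate statements:

-- step 1: each adapter's proc reads the next batch of unread rows
select top 50 [id], Payload
from dbo.Inbound
where Processed = 'N'
for xml auto

-- step 2: only afterwards does it flip the flag; an adapter that polls
-- between steps 1 and 2 reads the very same 50 rows
update dbo.Inbound
set Processed = 'Y'
where [id] in (
    select top 50 [id]
    from dbo.Inbound
    where Processed = 'N' )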

One possible answer is to create a transaction in the stored procedure that contains both the SELECT and the UPDATE. However, that could introduce table-locking issues and prevent additional data from being inserted into the table while the poll runs. So, I'm not in love with that solution.
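For reference, here's roughly what that combined SELECT/UPDATE could look like. This is only a sketch against a hypothetical dbo.Inbound table with a Processed flag; the UPDLOCK/READPAST/ROWLOCK hints plus the OUTPUT clause (SQL Server 2005) are one way to keep the locks at the row level so concurrent inserts aren't blocked:

create procedure dbo.usp_GetNextBatch
as begin
    -- remembers which rows this call claimed
    declare @claimed table ( [id] int not null )

    begin transaction

    -- claim up to 50 unread rows; READPAST makes a second caller skip
    -- rows this caller has locked instead of waiting on or re-reading them
    update dbo.Inbound
    set Processed = 'Y'
    output inserted.[id] into @claimed
    where [id] in (
        select top 50 [id]
        from dbo.Inbound with (updlock, readpast, rowlock)
        where Processed = 'N' )

    commit transaction

    -- return only the rows this call claimed
    select i.[id], i.Payload
    from dbo.Inbound i
    join @claimed c on c.[id] = i.[id]
    for xml auto
end

Even with the hints, though, I'm not wild about holding locks in the polling path.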

So for those of you who are either (a) SQL gurus, or (b) successful implementers of this scenario, what thoughts do you have? Maybe I'm just brain-cramping for the holidays, but I still think this is a useful discussion.

Comments

  • Anonymous
    November 22, 2005
Without being a SQL adapter guru, my question would be: why have the two servers poll the same data in the same database?

    If both BTS servers are part of the same BizTalk group, I would prefer to pin the receive side to a single server and let both servers process the data in orchestrations or whatever. (This can be done fairly simply by creating a new host with a single host instance and then binding the SQL adapter and your receive location to that new host.)
  • Anonymous
    November 22, 2005
Ahh, concurrency lives. I don't think you're getting around this without some sort of locking.
  • Anonymous
    November 22, 2005
Here's a snippet. It may be overkill, but it'll do the job. It's a bit convoluted in order to overcome the fact that "begin transaction" by itself doesn't prevent a "select" from returning the same row to multiple sessions at the same time...

    create table dbo.Test (
        [id] int identity(1,1) not null,
        status char(1) not null,       -- 'N' = new, 'I' = in process
        value float not null,
        [bit] bit not null default 0   -- scratch flag, set only inside the transaction
    )
    go

    -- seed a few new ('N') rows
    insert into dbo.Test ( status, value ) values ( 'N', rand() );
    insert into dbo.Test ( status, value ) values ( 'N', rand() );
    insert into dbo.Test ( status, value ) values ( 'N', rand() );
    insert into dbo.Test ( status, value ) values ( 'N', rand() );
    insert into dbo.Test ( status, value ) values ( 'N', rand() );
    go

    create procedure dbo.usp_NextTest
    as begin

        -- holds the ids claimed by this call
        declare @table table (
            [id] int not null )

        begin transaction

        -- claim the next two 'N' rows and mark them 'I'; the update's locks
        -- keep a second session from grabbing the same rows mid-flight
        update dbo.Test
        set status = 'I', [bit] = 1
        where [id] in (
            select top 2 [id] from dbo.Test
            where status = 'N' )

        -- remember which rows this call just flagged
        insert into @table
        select [id] from dbo.Test
        where [bit] = 1;

        -- reset the scratch flag before the transaction ends
        update dbo.Test
        set [bit] = 0;

        commit work

        -- return only the claimed rows
        select [id], status, value from dbo.Test
        where [id] in
            ( select [id] from @table )
        for xml auto

    end;


You could get away with just 'I'/'N' (no [bit] column) if you could be absolutely certain that there were no 'I' rows when you started.

By updating the [bit] column to 1 and then back to 0 inside the transaction, only the current session ever "sees" the 1s.

    Anyway, maybe there's a better way, but this is what I did and it seems to behave.
  • Anonymous
    November 22, 2005
How about simply having a control table? Part of the stored procedure that retrieves the data looks in the control table to see whether anything is working on the data at present. If so, it skips this poll. When the other proc has finished working with the data, it updates the control table.
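    A minimal sketch of that idea (the table and procedure names here are hypothetical):

    -- one-row control table: is a poll currently in flight?
    create table dbo.PollControl ( InUse bit not null default 0 )
    insert into dbo.PollControl ( InUse ) values ( 0 )
    go

    create procedure dbo.usp_TryPoll
    as begin
        -- atomically claim the control row; only one caller can flip 0 -> 1
        update dbo.PollControl set InUse = 1 where InUse = 0

        if @@rowcount = 0
            return  -- another poller is busy, so skip this turn

        -- ... select and flag the TOP 50 rows here ...

        -- release the claim for the next poller
        update dbo.PollControl set InUse = 0
    end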
  • Anonymous
    November 28, 2005
One SQL solution would be to have a "BatchId" column. Your stored procedure first stamps the new rows with a pre-defined batch ID (or a random one), then SELECTs * WHERE BatchId = theBatch (sketched below). That way every row is assigned to exactly one batch, and each batch is processed completely by a single adapter transaction.

    This also gives you the traceability you want in a reliable-messaging model.
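    A minimal sketch (table and column names hypothetical):

    declare @batch uniqueidentifier
    set @batch = newid()

    -- stamp all currently unclaimed rows with this call's batch id
    update dbo.Inbound
    set BatchId = @batch
    where BatchId is null

    -- return exactly this batch; re-running the select with the same
    -- @batch value reproduces the identical set, which is the traceability win
    select [id], Payload, BatchId
    from dbo.Inbound
    where BatchId = @batch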
  • Anonymous
    November 28, 2005
Good call, Kevin. That's not bad. I've also received some feedback offline about alternate ways to handle it: specifically, using a "control" table that the stored proc checks to see whether another SQL adapter is currently processing records, skipping its turn if so. However, I think I like your batch ID more.
  • Anonymous
    November 29, 2005
Well, how about this one:

    declare @currentguid uniqueidentifier
    set @currentguid = newid()
    begin tran
    -- cap the next statement at 50 rows
    set rowcount 50
    -- stamp up to 50 unclaimed rows with this call's guid
    update testpickup set processed = @currentguid where processed is NULL
    set rowcount 0
    -- return only the rows stamped with our guid
    select * from testpickup where processed = @currentguid
    commit tran

    Make your processed field a nullable uniqueidentifier.
  • Anonymous
    February 01, 2006
Hi, the way I solved this and implemented it on a number of projects was to initiate the call to the SQL stored proc from inside an orchestration, rather than depend on the adapter's polling interval. I used a task scheduler (similar to the one available on GotDotNet) to start that orchestration; it makes the call through a SQL port to a stored proc, and another orchestration subscribes to the message type, correlating to a single record. This worked perfectly every time: no concurrency issues, and you keep your scalability. Hope this helps.
  • Anonymous
    August 28, 2006
    There are many typical scenarios in which Service Broker is the ideal solution for interactions between
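    For context, the Service Broker property that matters here is that RECEIVE dequeues under lock, so two competing readers can never pull the same message. A minimal sketch, assuming a Service Broker queue named dbo.WorkQueue already exists:

    declare @handle uniqueidentifier, @body varbinary(max)

    begin transaction

    -- wait up to 2 seconds for a message; anything another session has
    -- received stays invisible until its transaction commits or rolls back
    waitfor (
        receive top (1)
            @handle = conversation_handle,
            @body = message_body
        from dbo.WorkQueue
    ), timeout 2000

    commit transaction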
  • Anonymous
    August 29, 2006
    Here's a query that's worked for us @ eBI Solutions...

    -- CREATE AN IN-MEMORY TEMPORARY TABLE
    DECLARE @tmpTableOfKeyFields TABLE
    (
        record_id int
    )

    -- PUT THE RECORDS THAT WE'RE GOING TO PROCESS INTO A TEMPORARY TABLE
    INSERT INTO @tmpTableOfKeyFields
    SELECT record_id
    FROM Table1
    WHERE status = 'READY'

    -- RETURN THE RESULT SET
    SELECT
        record_id,
        status,
        data
    FROM Table1 WITH (ROWLOCK)
    WHERE record_id IN
    (
        SELECT DISTINCT record_id FROM @tmpTableOfKeyFields
    )
    FOR XML AUTO, ELEMENTS

    -- UPDATE THE RECORD SET TO PENDING
    UPDATE Table1
    SET status = 'PEND',
        modified = getdate()
    FROM Table1
    INNER JOIN @tmpTableOfKeyFields TableOfKeyFields
        ON Table1.record_id = TableOfKeyFields.record_id

    Lucas and I just demo'd this on today's BizTalk MVP Webcast. ;)

    Jay Lee
    eBI Solutions, LLC
  • Anonymous
    August 30, 2006
Ladies and gentlemen, Jay Lee. Thanks, Jay, for the comment.