SQL Updategrams (Insert/Update/Delete) in BizTalk: Part II

Yesterday (see Part I) we looked at the first part of a BizTalk and SQL Server updategram project. The process will enable a single updategram to process inserts, updates AND deletes in one transaction. We've already set up a base project that contained all of the ports and message parts needed to use an insert/update/delete updategram for a particular table. Today's piece looks at how to get the whole BizTalk project working.

The very first thing we do here is make a change to our updategram schema in order to support multiple record updates. I use the standard SQL updategram updg:id attribute as my unique identifier. This is what helps SQL Server resolve which before record coincides with which after record. There are some very nice instructions out there on how to get this required updategram field into your updategram schema. Notice the updg:id attribute in my BizTalk-generated CustomerUpdate schema.
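As a quick sketch (the table and column names here are illustrative, not the exact generated schema), updg:id is what pairs each before record with its matching after record when multiple rows travel in one sync block:

    <updg:sync xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
      <updg:before>
        <Customers updg:id="1" CustomerID="1001" />
        <Customers updg:id="2" CustomerID="1002" />
      </updg:before>
      <updg:after>
        <!-- updg:id="1" here matches updg:id="1" above, so SQL Server
             knows which before record each after record applies to -->
        <Customers updg:id="1" CustomerID="1001" Name="Contoso" />
        <Customers updg:id="2" CustomerID="1002" Name="Fabrikam" />
      </updg:after>
    </updg:sync>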

Now add a new BizTalk project to the solution. Add a reference to the "base" project created previously. In this new project we add a schema that represents the "customers" we are interacting with. The schema has a repeating Customer node that contains a DatabaseAction element identifying whether that customer is to be inserted, updated, or deleted in the database.
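A sample instance of that schema might look something like this (the namespace and field names are just placeholders for whatever your schema defines):

    <ns0:Customers xmlns:ns0="http://UpdategramDemo.Customers">
      <Customer>
        <CustomerID>1001</CustomerID>
        <Name>Contoso</Name>
        <DatabaseAction>Insert</DatabaseAction>
      </Customer>
      <Customer>
        <CustomerID>1002</CustomerID>
        <Name>Fabrikam</Name>
        <DatabaseAction>Update</DatabaseAction>
      </Customer>
      <Customer>
        <CustomerID>1003</CustomerID>
        <Name>Northwind</Name>
        <DatabaseAction>Delete</DatabaseAction>
      </Customer>
    </ns0:Customers>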

The next big piece of the equation is the XSLT map that builds the actual updategram message. On the left side of the map is our recently created "customer" schema. On the right, we use the "update" edition of the updategram schema in the referenced project. Why the "update" one? If you're not familiar with updategrams, the key concept is that the message has a before section and an after section that SQL Server uses to figure out what to do with the data. A record in the before section but not the after equals a Delete operation; an after record with no before equals an Insert; and a record in both the before and after results in an Update. So the core goal of this map is to add records to the before and after sections in whatever combination achieves the desired outcome in the database. I created two pages in the map in order to separate the logic. The first page contains the Looping functoid with a link from the source Customer node and two links into the destination customer nodes in the before and after sections of the updategram.
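To make that concrete: given the three sample customers above (one insert, one update, one delete), the map should emit an updategram shaped roughly like this sketch:

    <updg:sync xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
      <updg:before>
        <Customers updg:id="1002" CustomerID="1002" />  <!-- update: before AND after -->
        <Customers updg:id="1003" CustomerID="1003" />  <!-- delete: before only -->
      </updg:before>
      <updg:after>
        <Customers updg:id="1001" CustomerID="1001" Name="Contoso" />   <!-- insert: after only -->
        <Customers updg:id="1002" CustomerID="1002" Name="Fabrikam" />  <!-- update -->
      </updg:after>
    </updg:sync>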

The second page of my map contains links between all the source fields and the corresponding fields in the destination after block. The only field mapped to the destination before block is the customerID, since that's all I really need from the original record (in the case of an update or delete operation). Now, the key to the Looping functoid creating the correct number of record instances lies in the Logical Equal functoids. In a nutshell, the logic is:

  • If the DatabaseAction equals either Update or Delete, then create the before updategram node. Remember from our conditions above, the update/delete scenario requires a before block.
  • If the DatabaseAction equals either Update or Insert, then create the after updategram node.

When a Looping functoid and a Logical Equal functoid both connect to the same destination node, a conditional looping behavior is created. So just because there are 10 customers in my source doesn't mean the Looping functoid will create 10 before or after blocks; it only creates the ones the conditions dictate. Make sense?
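If you'd rather see it as raw XSLT, the generated map logic boils down to something like this sketch (node names assumed from the examples above, not pulled from an actual generated map):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
      <xsl:template match="/">
        <updg:sync>
          <updg:before>
            <!-- update/delete scenarios require a "before" record -->
            <xsl:for-each select="//Customer[DatabaseAction='Update' or DatabaseAction='Delete']">
              <!-- CustomerID doubles as the updg:id so before/after pairs line up -->
              <Customers updg:id="{CustomerID}" CustomerID="{CustomerID}" />
            </xsl:for-each>
          </updg:before>
          <updg:after>
            <!-- update/insert scenarios require an "after" record -->
            <xsl:for-each select="//Customer[DatabaseAction='Update' or DatabaseAction='Insert']">
              <Customers updg:id="{CustomerID}" CustomerID="{CustomerID}" Name="{Name}" />
            </xsl:for-each>
          </updg:after>
        </updg:sync>
      </xsl:template>
    </xsl:stylesheet>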

Now we simply build the orchestration that will do the data update. Create three orchestration messages: one for the customer schema, one that points to the referenced project's multi-part UpdateCustomer request message type, and one that points to its multi-part UpdateCustomer response message type. These showed up in the message picker because we previously marked them as public. Then build out the simple process flow, and use the port types from the referenced project to do the send/receive to SQL.

Build and deploy the base project, and then do the same with the primary project. Create the necessary send/receive ports and locations for the primary project and bind it. Note that you do not have to bind or enlist the base orchestrations that hold the shared port types and message parts. Then create an instance of the "customer" schema and populate it with data and the appropriate database actions. There you go: an updategram that can do mass operations of all types against a single table. Neato.

If you're interested in any of the source code, lemme know.

Comments

  • Anonymous
    December 07, 2005
    Very nice. Very straightforward. Taught me a lot. Thanks

  • Anonymous
    December 07, 2005
    I have a mapping problem. I simply don't know how to do it. Maybe you could help out or point me to a good article that teaches how to use the mapper.

    I have two messages, one message is from a flatfile (MessageA) and another message is from a database (MessageB). Now, MessageA has two columns (Code1 and Code2). MessageB has (ID, CodeIndex). I want to join these two messages into one and insert into a database table. The target table is (ID, Code). So depending on CodeIndex from MessageB, I either join Code1 or Code2. Here's sample data.

    Message A
    -----------
    Code1, Code2
    111, 222
    333, 444

    Message B
    -----------
    ID, CodeIndex
    1, '03'
    2, '04'
    3, '03'
    4, '04'

    What I want to insert into the database
    (the rule: if CodeIndex='03' then use Code1;
    if CodeIndex='04' then use Code2)
    -----------------------------------
    ID, Code
    1, 111
    1, 333
    2, 222
    2, 444
    3, 111
    3, 333
    4, 222
    4, 444

    I am sure it's not that hard with the mapper, I just don't know enough about it and I haven't come upon articles that explain it in depth.

    Thanks

  • Anonymous
    December 09, 2005
    Interesting. Honestly, I'd probably end up writing my own XSLT using the Scripting functoid. Some of that complex looping is tough using a graphical interface, and just digging in with the XSL is much easier.
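    For what it's worth, a rough sketch of that hand-rolled XSLT might look like this, assuming the two messages arrive wrapped in one aggregated input, and using made-up node names (MessageA/Row, MessageB/Row):

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <Rows>
          <!-- for each B row, pull Code1 or Code2 out of every A row -->
          <xsl:for-each select="//MessageB/Row">
            <xsl:variable name="id" select="ID" />
            <xsl:variable name="idx" select="CodeIndex" />
            <xsl:for-each select="//MessageA/Row">
              <Row>
                <ID><xsl:value-of select="$id" /></ID>
                <Code>
                  <xsl:choose>
                    <xsl:when test="$idx = '03'"><xsl:value-of select="Code1" /></xsl:when>
                    <xsl:otherwise><xsl:value-of select="Code2" /></xsl:otherwise>
                  </xsl:choose>
                </Code>
              </Row>
            </xsl:for-each>
          </xsl:for-each>
        </Rows>
      </xsl:template>
    </xsl:stylesheet>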

  • Anonymous
    January 12, 2006
    Can one have the response schema be populated with the record(s) that were inserted, complete with any additional columns that the database would have populated, such as the Primary (Identity) Key?

  • Anonymous
    January 23, 2006
    Good question. I'll honestly have to dig in and look around. It doesn't seem so, so you'd be better off using a stored proc that returns the full dataset (including new identity keys).


  • Anonymous
    June 23, 2006
    Just a bit over a year since I started BizTalk blogging, so I thought I'd take 5 minutes and review...

  • Anonymous
    August 29, 2006
    Hi. Good article - is it possible to download this project?

  • Anonymous
    August 30, 2006
    Thanks Koifo. I actually don't have the project available anymore after rebuilding a VM and forgetting to back up a few projects. Hopefully I included enough steps here that you can recreate it fairly easily.

  • Anonymous
    October 12, 2006
    Thanks for the valuable information. Yes, I would like the sample source code, just for practice on my system. It seems pretty good for learners.

  • Anonymous
    October 13, 2006
    See two comments up - I don't have the source for the project after rebuilding my virtual machine. One of these days I'll build it again! Per your email question, to do an insert, update, and delete against the same table, you'd need the right combination of before/after blocks. When there's an error during processing and you see "The Messaging Engine has suspended "1" message(s) from adapter 'FILE' due to failures in message processing", check the event log entry right after it. That one usually contains more detailed information about what actually went wrong.
