Business Logic in the Database?

One of the basic tenets of three-tiered architectures was that business logic was a middle-tier thing and should never be in the database. Stored procedures were a holdover from the bad old days of client-server, and while they might be OK for speeding up certain operations, they should never contain anything that looked like business logic.

This separation was done for very sound reasons. Keeping all the business logic in one place made maintenance easier, and the tools available for middle-tier code had much better version control and debugging facilities. Stored procedures that contained fragments of business logic could make it very difficult for the middle-tier developer to determine what happened when a database call was made. Stored procedure code and middle-tier code were often written and maintained by different teams, so coordination could be a problem. Doing significant amounts of business code in stored procedures often involved calls to extended stored procedures, which are difficult to debug and have the potential to take the whole database server down. And on top of all this, TSQL is a poor language for writing complex business logic.

The burning question, then, is “does SQL Server 2005 change this?” A huge amount of development effort went into hosting the Common Language Runtime (CLR) in SQL Server 2005, and the Visual Studio team added a lot of features to make developing, maintaining, and debugging stored procedures and triggers in the CLR languages a rich experience. Does this make business logic in the database a reasonable thing to do?

I think the most compelling use cases for CLR stored procedures are replacing extended stored procedures and speeding up processor-intensive operations in stored procedures. Processor-intensive operations include things like complex math, string manipulation, and XML transformations. These things were either impossible or very slow in TSQL. Using CLR stored procedures for these things is obviously a good idea, but for the most part this doesn’t qualify as business logic in the database.
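
To make the string-manipulation case concrete, here’s a minimal sketch of a CLR scalar function that does regular-expression matching, something that is essentially impossible in straight TSQL. The class and function names (StringUtilities, RegexMatch) are just illustrative; nothing like this ships in the box:

    using System.Data.SqlTypes;
    using System.Text.RegularExpressions;
    using Microsoft.SqlServer.Server;

    public partial class StringUtilities
    {
        // A CLR scalar function for the kind of string manipulation that is
        // painful or impossible in pure TSQL: regular-expression matching.
        [SqlFunction(IsDeterministic = true)]
        public static SqlBoolean RegexMatch(SqlString input, SqlString pattern)
        {
            // Return NULL when either argument is NULL, as a TSQL caller would expect.
            if (input.IsNull || pattern.IsNull)
                return SqlBoolean.Null;

            return Regex.IsMatch(input.Value, pattern.Value);
        }
    }

Once the compiled assembly is cataloged with CREATE ASSEMBLY and the method is exposed with CREATE FUNCTION … EXTERNAL NAME, it can be called from any query like a built-in function.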

I still think splitting business logic between the middle tier and stored procedures is not a good thing from understandability, debugging, and maintenance perspectives. CLR stored procedures and the new VS features make this more palatable, however, so there may be cases where splitting the business logic makes sense. Moving some of the things that used to be part of the data layer in middle-tier applications into the database may make sense. Using the Service Broker to make these data-intensive parts of the application reliable and asynchronous might also be a significant advantage.

Moving whole business services to the database can be very compelling in service-oriented applications. Because the whole service runs in the database, most of the objections to splitting business logic between the middle tier and the database don’t apply. If these are very data-intensive services – essentially fairly thin wrappers around database data – then running the service in the database can have significant performance advantages. Using the reliable queuing, activation, and communication features of Service Broker to manage these services makes them reliable, asynchronous, and easy to implement. Another significant advantage of this architecture is that the data, the code, the messages, the security, and the configuration are all contained in the same database and maintained with transactional consistency. This means that a service can be backed up, restored, failed over, and deployed in its entirety because it’s all in the database. A whole service, its data, and its in-progress work can be moved from one database server to another by moving the database, without losing any work in progress.
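
As a rough sketch of what the code side of such a service might look like, here’s a hypothetical CLR stored procedure that could be named as the activation procedure of a Service Broker queue. The queue, contract, and service definitions are assumed to already exist, and every name here (OrderService, ProcessOrderQueue, dbo.OrderQueue) is invented for illustration:

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public partial class OrderService
    {
        // Activation procedure for a hypothetical queue named dbo.OrderQueue.
        // Service Broker starts it when messages arrive, so the service's code
        // runs inside the database, right next to its data.
        [SqlProcedure]
        public static void ProcessOrderQueue()
        {
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();

                while (true)
                {
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        SqlCommand receive = new SqlCommand(
                            "RECEIVE TOP (1) conversation_handle, message_type_name, message_body " +
                            "FROM dbo.OrderQueue;", conn, tx);

                        bool gotMessage;
                        using (SqlDataReader reader = receive.ExecuteReader())
                        {
                            gotMessage = reader.Read();
                            if (gotMessage)
                            {
                                // The data-intensive work goes here: update order tables from
                                // message_body in the same transaction as the RECEIVE. A real
                                // service would also use conversation_handle to send a reply
                                // or end the conversation.
                            }
                        }

                        tx.Commit();

                        if (!gotMessage)
                            return; // queue drained; activation fires again when new messages arrive
                    }
                }
            }
        }
    }

Because the RECEIVE and the data changes share one transaction, a failure anywhere simply puts the message back on the queue – the reliability comes for free rather than from extra plumbing.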

This doesn’t mean that all services should be moved to the database tier. In most computer rooms, the most expensive processor cycles are the ones in the main database server. It may not make sense to have your 32-proc, 64-bit database server spending its time amortizing loans or formatting web pages. This is especially true for services that do extensive disk or network IO. Do you want your expensive database threads waiting for a response from an eBay bid?

There are two new ways to handle logic you don’t want to run on your expensive database server. The first is to use an external application to receive and process Service Broker messages that involve a lot of processing or IO. The Service Broker queues make the calls asynchronous and reliable, but the external message-processing logic keeps this processor-intensive work off your database server’s processors. The second approach is to move both the processing and the data to an external machine. You don’t need to waste the main database’s time serving up catalog entries to web pages or reporting on quarterly sales. You can move reference data that changes slowly to many smaller database machines that can service requests more cheaply. This not only makes these requests faster but takes some of the less critical work off your main database server. Replication can keep this data up to date, and Service Broker can be used to manage and communicate with these services.
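
Here’s a minimal sketch of the first approach: an external worker process that drains a Service Broker queue over an ordinary connection. The queue name (dbo.WorkQueue), the connection string, and the processing step are all placeholders:

    using System;
    using System.Data.SqlClient;

    class ExternalWorker
    {
        // Drains a hypothetical Service Broker queue from a process running outside
        // the database server, so the heavy lifting uses this machine's cycles
        // instead of the expensive database server's.
        static void Main()
        {
            // Assumed connection string; adjust server, database, and credentials.
            string connectionString = "Server=MainDb;Database=Orders;Integrated Security=true";

            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();

                while (true)
                {
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        // Block for up to five seconds waiting for the next message.
                        SqlCommand receive = new SqlCommand(
                            "WAITFOR (RECEIVE TOP (1) message_type_name, message_body " +
                            "FROM dbo.WorkQueue), TIMEOUT 5000;", conn, tx);

                        using (SqlDataReader reader = receive.ExecuteReader())
                        {
                            if (reader.Read())
                            {
                                string messageType = reader.GetString(0);
                                byte[] body = reader.IsDBNull(1)
                                    ? new byte[0]
                                    : (byte[])reader.GetValue(1);

                                // The expensive processing or outbound IO happens here;
                                // a crash before Commit puts the message back on the queue.
                                Console.WriteLine("Processed {0} ({1} bytes)", messageType, body.Length);
                            }
                        }

                        // Committing is what removes the message from the queue.
                        tx.Commit();
                    }
                }
            }
        }
    }

The pattern is the same as the in-database version above; the only thing that changes is whose processor cycles get spent.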

The bottom line is that where to put your business logic is not as simple a decision as it was before SQL Server 2005. There are now compelling reasons to run some of the more data-intensive parts of your business logic in the database. Service Oriented Architectures make this easier to do and more compelling but SOA is not required for this approach. Let me know if any of this changes the way you think about your business logic.

Comments

  • Anonymous
    April 05, 2006
    I like your comments but the problem I see has to do with potential conflicts.  I would like to see more information on how that would be handled especially when replication is involved.
  • Anonymous
    April 05, 2006
    I'm not sure I understand what kind of conflicts you're asking about.  The location of the business logic shouldn't make conflicting data updates any more or less likely.  If you are making copies of the database for scale-out purposes then there are obviously conflicting update issues, as there would be any time there are multiple copies of the same data.  The simplest way to handle that is to make only one copy read-write and make the other copies read-only copies used for query performance.  This only works if the update load is reasonably light.  In a heavy update situation, I would recommend partitioning the data instead of replicating it.  I'm finishing up a paper on scaling out SQL Server that covers the alternatives and tradeoffs.  I'll post something when it's published.
  • Anonymous
    April 12, 2006
    Ah, I understand now (sorry to be confusing).  I will also check back for your paper as it would be of interest to me.