Why you should NOT optimize your backup strategy for backup speed

Sounds kinda funny, doesn't it?  I mean, who here doesn't have someone breathing down their neck every time a backup runs past its window?  So why would you NOT optimize for fast backups?  It just makes sense.  You do backups every day, so they should be optimized.

Two words:

Disaster Recovery

If you think people get cranky when you go 20 minutes past your backup window, try taking 2 days to get your mission-critical database back on the air! That kind of attention is not pleasant. 

What I'm trying to get across is that you really want to think through your DR strategy from the standpoint of a recovery plan instead of a backup plan.  Plan exactly how you will recover in different scenarios.  Work out the most efficient way to get back on the air from every situation you can think of.  Then work backward and make sure that you have a backup strategy that gives you the building blocks you'll need to accomplish those plans.

That's not to say that you can't have both; it is entirely possible to come up with a plan that minimizes both backup and restore time.  You just have to plan it that way.  The extreme example is the person who does a full backup once a month, with LOTS of log backups in between.  Log backups are quick, right?  So the backup strategy is optimal, right?  But applying all of those log backups to recover from a disaster would be a nightmare: a failure near the end of the month could mean replaying thousands of log backups in sequence.
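To make that concrete, here's a minimal sketch of that "backup-optimized" schedule in T-SQL.  The database name, paths, and 15-minute interval are all hypothetical, purely for illustration:

    -- Hypothetical schedule optimized purely for backup speed.
    -- Full backup: run once a month.
    BACKUP DATABASE AppDB
        TO DISK = N'D:\Backups\AppDB_full.bak'
        WITH INIT;

    -- Log backup: run every 15 minutes (each one is quick).
    BACKUP LOG AppDB
        TO DISK = N'D:\Backups\AppDB_log_200606190915.trn';

    -- The recovery bill: a failure 30 days after the full backup means
    -- restoring it and then replaying every log backup since, in order:
    -- 4 per hour * 24 hours * 30 days = 2,880 RESTORE LOG operations.

Every one of those restores has to succeed, in sequence, while everyone waits.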

SQL Server 2005 Enterprise Edition has a number of features that enable higher availability and faster recovery, and they are worth looking into.  One example:

Partial Database Availability

This is huge!  In a nutshell, this feature means that as soon as your primary filegroup is online, your database is available.  Obviously, only those filegroups which are online can be accessed, but at least you can work with whatever is online.  In a DR situation, it works like this:

  • Restore and roll forward your primary filegroup, and the minimal set of additional filegroups required to get your most important app running.
  • At this point, your most critical customers are already online.
  • Then, at your leisure, restore/roll forward additional filegroups containing less critical data such as historical records.

In this way you can get back on the air far quicker than it would take to restore your entire 2TB database.  Unless you plan ahead and break your database up into appropriate filegroups, this won't work for you.  A sketch of what that restore sequence might look like is below.
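Here's a minimal sketch of such a piecemeal restore in T-SQL, assuming a hypothetical AppDB with a critical 'Orders' filegroup and a less critical 'History' filegroup (all names and paths are illustrative):

    -- Step 1: Restore the primary filegroup plus the minimal set of
    -- filegroups your most important app needs, using WITH PARTIAL.
    RESTORE DATABASE AppDB
        FILEGROUP = 'PRIMARY', FILEGROUP = 'Orders'
        FROM DISK = N'D:\Backups\AppDB_full.bak'
        WITH PARTIAL, NORECOVERY;

    -- Step 2: Roll forward and recover. The database comes online here;
    -- your most critical customers are already back in business.
    RESTORE LOG AppDB FROM DISK = N'D:\Backups\AppDB_log1.trn' WITH NORECOVERY;
    RESTORE LOG AppDB FROM DISK = N'D:\Backups\AppDB_log2.trn' WITH RECOVERY;

    -- Step 3: At your leisure, restore the less critical filegroups online
    -- (an Enterprise Edition capability) and roll them forward to match.
    RESTORE DATABASE AppDB
        FILEGROUP = 'History'
        FROM DISK = N'D:\Backups\AppDB_full.bak'
        WITH NORECOVERY;
    RESTORE LOG AppDB FROM DISK = N'D:\Backups\AppDB_log1.trn' WITH NORECOVERY;
    RESTORE LOG AppDB FROM DISK = N'D:\Backups\AppDB_log2.trn' WITH RECOVERY;

Note that WITH PARTIAL is what marks this as a piecemeal restore sequence; everything not restored in step 1 stays offline until you bring it back in step 3.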

The point to all this is to encourage you to plan and rehearse disaster recovery before the lights go out and everyone is in headless-chicken mode.  Rational decisions are rarely made in a panic.

 

Kevin Farlee

Comments

  • Anonymous
    June 19, 2006
Good points indeed. A lot of folks don't think like that until it's too late. Then it's resume time. You guys are all giving some great info with this particular blog. You need to show the other teams this and get them to either update theirs more often or start one up <g>.
  • Anonymous
    June 19, 2006
Very useful, thanks. What I think would be helpful is the ability to manage partitioning in Management Studio: for example, to examine a database and view its size, then use a management GUI which easily allows you to partition a large table or tables on a specific key to make archiving across filegroups easier. This could then be integrated into a GUI that provides for easy backup and recovery of this partitioned/archived data, pulling in some of the snapshot stuff. It seems to me that the GUI tools are very lightweight in this area. Oracle, for example, is moving ahead on ease of use in some of these areas.
  • Anonymous
    August 23, 2006
You most likely won't do this for end user performance reasons.  There are some interesting...