3 into 4 can go…

Once you’ve determined the number of Exchange Server 2010 mailbox database copies you’re going to use, it makes sense to deploy a complementary number of mailbox servers in the DAG.  If you decide to go with an odd number of database copies, go with an odd number of nodes; if you use an even number of copies, aim to deploy an even number of nodes – it’ll make your life easier in the long run.  …but that could well mean weighing ease of management against cost.  Surely there will be cases where it makes more financial sense to deploy 4 servers to host your 3 copies rather than 6, for example?  …but what are the implications?

Well, it’s not a problem to get it to work.  It just means you’ll likely be compelled to introduce a layer of complexity into the solution.

To illustrate this I’ll use an example with 4 nodes in a single DAG and 3 copies of every mailbox database; to keep it simple I’ll deploy a single database on each disk (JBOD), using volume mount points.
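For context, the bare bones of such a DAG might be put together in the Exchange Management Shell along these lines – a minimal sketch in which the DAG1, HUB1 and MBX1–MBX4 names are hypothetical:

```powershell
# Create the DAG with a file share witness (the witness matters here because
# an even number of members uses node majority with file share witness)
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer HUB1 -WitnessDirectory C:\DAG1FSW

# Add the four mailbox servers as DAG members
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX3
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX4
```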

What’s the problem?

On every DAG member you create the same directory structure: c:\data\mbxdb01, c:\data\mbxdb02 … up to c:\data\mbxdb06 (since you have 6 disks).  You deploy mbxdb01 on nodes 1, 2 and 3, but now on node 4 you have a c:\data\mbxdb01 mount point you can’t use, because every copy of a database has to live at the same path on every server – so c:\data\mbxdb01 can only ever be used by mbxdb01.

          Node 1            Node 2            Node 3            Node 4
Disk01    c:\data\mbxdb01   c:\data\mbxdb01   c:\data\mbxdb01   c:\data\mbxdb01
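To make the constraint concrete, here’s a minimal sketch in the Exchange Management Shell, reusing the hypothetical names from above.  Note that Add-MailboxDatabaseCopy has no parameter to choose a different path – a copy always lands at the same paths as the source database:

```powershell
# mbxdb01 is created on node 1; its EDB and log paths are now fixed for every copy
New-MailboxDatabase -Name mbxdb01 -Server MBX1 `
    -EdbFilePath C:\data\mbxdb01\mbxdb01.edb -LogFolderPath C:\data\mbxdb01

# The copies on nodes 2 and 3 reuse exactly the same paths
Add-MailboxDatabaseCopy -Identity mbxdb01 -MailboxServer MBX2 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity mbxdb01 -MailboxServer MBX3 -ActivationPreference 3

# mbxdb01 now has its 3 copies, so the C:\data\mbxdb01 mount point on node 4
# can never be used - that path is reserved for mbxdb01 and nothing else
```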

At first glance you have 2 choices – don’t use the disk (!$@&?!) or change your design to 4 copies (equally unpalatable).

One Solution

…but there is a way around this in this particular example, and that is to deploy a directory structure which is different on each node.  With 6 disks on each of the 4 nodes – 24 disks in all – and 3 copies per database, there is room for 8 databases rather than 6:

          Node 1            Node 2            Node 3            Node 4
Disk01    c:\data\mbxdb01   c:\data\mbxdb01   c:\data\mbxdb01   c:\data\mbxdb02
Disk02    c:\data\mbxdb03   c:\data\mbxdb02   c:\data\mbxdb02   c:\data\mbxdb03
Disk03    c:\data\mbxdb04   c:\data\mbxdb03   c:\data\mbxdb04   c:\data\mbxdb04
Disk04    c:\data\mbxdb05   c:\data\mbxdb05   c:\data\mbxdb05   c:\data\mbxdb06
Disk05    c:\data\mbxdb07   c:\data\mbxdb06   c:\data\mbxdb06   c:\data\mbxdb07
Disk06    c:\data\mbxdb08   c:\data\mbxdb07   c:\data\mbxdb08   c:\data\mbxdb08

The above design works very nicely I think (…and of course the creation of this directory structure can be scripted, using diskpart for example – see the sketch below), but it does mean you need to be careful when you lose a disk or when you are rebuilding a failed server (again, both scriptable with the right skills).  Perhaps more importantly, think about what happens when you decide to add a node to the design at some point in the future.  It could mean quite a lot of rejigging and downtime – the worst case might be changing the directory structure across all servers in the DAG.  (…ouch?!)
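As a rough illustration of that scripting, node 1’s mount points from the table above might be created like this – a sketch only, in which the physical disk numbers, volume labels and 64k allocation unit are assumptions you’d adapt to your own hardware:

```powershell
# Node 1 layout from the table above: physical disk number -> database mount point
$layout = @{1='mbxdb01'; 2='mbxdb03'; 3='mbxdb04'; 4='mbxdb05'; 5='mbxdb07'; 6='mbxdb08'}

$commands = foreach ($disk in ($layout.Keys | Sort-Object)) {
    $path = "C:\data\$($layout[$disk])"

    # The mount-point directory must exist (and be empty) before diskpart can assign to it
    New-Item -ItemType Directory -Path $path -Force | Out-Null

    "select disk $disk"
    "create partition primary"
    "format fs=ntfs unit=64k label=$($layout[$disk]) quick"
    "assign mount=$path"
}

# Hand the generated commands to diskpart as a script
$file = Join-Path $env:TEMP 'node1-mountpoints.txt'
$commands | Set-Content $file
diskpart /s $file
```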

Managing a large DAG is easy?

I’m of the opinion that one of the areas of Exchange Server 2010 which needs time and effort to get right is managing large DAGs with swarms of databases housing multitudes of big mailboxes.  You’re going to have to get good at scripting to make things run smoothly.  Scripts like the Exchange 2010 Database Redundancy Check Script will make or break a big deployment and should prevent a beautiful design from descending into chaos.  …and the more straightforward the design, the more straightforward its management will be in most cases.
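Even a quick one-liner in the Exchange Management Shell goes a long way here.  For instance – reusing the hypothetical DAG1 name from earlier – a snapshot of every copy’s health across the DAG might look like this:

```powershell
# One view of every database copy in the DAG: health, queue lengths, search index state
Get-DatabaseAvailabilityGroup DAG1 |
    ForEach-Object { $_.Servers } |
    ForEach-Object { Get-MailboxDatabaseCopyStatus -Server $_ } |
    Sort-Object Name |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState -AutoSize
```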

So 3 into 4 can go  …but proceed with caution.

Comments

  • Anonymous
    June 17, 2010
    I think you're making this more complex for yourself than necessary by looking at it from a disk point-of-view instead of a database/DAG point-of-view. What's wrong with just continuing the numbering, e.g.

    Node1   Node2   Node3   Node4
    DAG1    DAG1'   DAG1'   DAG2
    DAG2'   DAG2'   DAG3    DAG3
    DAG3'   DAG4    DAG4'   DAG4'

    (pattern repeats hereafter) Then, create mount points only for the DAGs hosted on a node, e.g. C:\data\<DagX>. Goal is to distribute the active copies over the number of nodes / disks, passive copies follow.

  • Anonymous
    June 18, 2010
    I'm not sure I understand...  When you refer to DAGs you must mean databases..?  It still means that the volume structure is different on each node..?  You have just made the way that you stripe them a little nicer.  Is that right?

  • Anonymous
    June 19, 2010
    I just wondered why you came up with that scheme as a solution. Perhaps you had a customer who reserved a drive for a DAG membership it was never going to utilize?

  • Anonymous
    June 22, 2010
    The way the databases are striped is kind of immaterial to the point I was trying to make (my fault for not making it clear). I just wanted to make the point that if you do have to lever an odd number of copies across an even number of nodes, you might have to deploy DAG members with different directory structures - easy to do, but it makes your design more complex, which in turn makes the solution more difficult to manage....  Hope we are not talking at cross-purposes.... Thanks for your comments.