If you still have servers in all of your branches, think again

If you run a large distributed environment, you have branches connected to your headquarters. More than a decade ago, these links had little bandwidth (around 64 kbps) or even used X.25, as some of my customers did. Links were generally unreliable and had a tendency to malfunction from time to time. Backup lines were either prohibitively expensive, or the alternative technologies were too immature to be used reliably. Back then you needed servers in your branches, with caching on those servers, so that work could resume if the link went down. Some of my customers had (and some still have) teams monitoring all the links (some over 1,000 locations) and working with the ISP to restore service on the failed ones. My customers used to have a large number of sites in Active Directory and file servers running in branches. You also needed backup software and tape drives on those machines to do local backups. When you work in these environments for some time, you tend to develop a habit of keeping whatever you have, and this blurs your vision of connectedness.

During the last decade, link speeds and reliability have gone up considerably. You can use 3G wireless backup lines for your primary lines, and link speeds have reached 1-5 Mbps in most places. Your mileage may vary, but the point is that link speeds have gone up at least 20 times (my home Internet connection speed has increased 40x in this period), and you can attain highly available lines with much less effort by combining different technologies. Not only can you use higher bandwidth to connect your branches, you can use a different topology as well. Think of this as a slider where each point enables different functionality as you increase your connected bandwidth. If you slightly increase your line bandwidth, you can start taking backups from the central location at night, or you can remove branch servers from your smaller branches. Several years ago I did an analysis for one of my customers to find the minimum number of PCs a branch needed before placing a branch server became feasible. I included operational link costs, the initial cost of the servers, and an estimated maintenance cost for the servers, and came up with a magic number of 14: if a branch had fewer than 14 PCs, the customer placed no branch server there and serviced the PCs from the central site instead. Of course your magic number will vary with your own conditions, but the point is: the more comfortable you feel with the links, the fewer servers you will need in branches.
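The break-even analysis above can be sketched roughly as follows. All cost figures are hypothetical placeholders I made up for illustration (they are not the customer's real numbers); what matters is the shape of the calculation — a branch server has a roughly fixed cost, while servicing PCs centrally has a cost that grows with the number of PCs.

```python
# Sketch of a branch-server break-even calculation.
# NOTE: every cost figure below is a hypothetical placeholder,
# not data from the original analysis.

YEARS = 5                        # planning horizon
SERVER_PURCHASE = 4000           # one-time hardware cost per branch server
SERVER_MAINT_PER_YEAR = 1500     # estimated yearly maintenance per server
LINK_COST_PER_PC_PER_YEAR = 170  # extra WAN bandwidth cost to serve one PC centrally


def server_total_cost(years=YEARS):
    """Total cost of placing a server in a branch (independent of PC count)."""
    return SERVER_PURCHASE + SERVER_MAINT_PER_YEAR * years


def central_total_cost(pcs, years=YEARS):
    """Total cost of servicing `pcs` PCs from the central site instead."""
    return LINK_COST_PER_PC_PER_YEAR * pcs * years


def break_even_pcs():
    """Smallest PC count at which a local server becomes the cheaper option."""
    n = 1
    while central_total_cost(n) < server_total_cost():
        n += 1
    return n


if __name__ == "__main__":
    print(break_even_pcs())
```

With these made-up figures the break-even point happens to land at 14 PCs; plug in your own link, hardware, and maintenance costs and your magic number will move.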

There are organizations that created their topology over a decade ago and have not changed it since. Some still fear unreliable links and keep Exchange servers in their branches (one specific customer of mine has over 600 Exchange servers). Exchange Server has been designed for placement in central sites for at least the last two versions, and it is getting harder to deploy it in branches with each new version. Some customers refuse to use read-only domain controllers (RODCs) on the basis of the extra load they bring to the network. It may not be feasible to remove every branch server in your environment; however, if you still have branch servers in all of your branches, it is time to reconsider your server placement strategy.

There is no point in trying to upgrade your software if you do not adapt yourself to the new perception of connectedness. Some of my customers are already using VPN over the Internet between their central sites and branches and have reduced their branch servers, with a goal of getting down to a dozen locations that have servers. Looking into the near future, we will be using IPsec VPNs over the IPv6 Internet for all of our client machines without even knowing which branch server is closest, so start getting ready now.