Load Balancers with Live Communications Server 2005

Without getting too technical in this blog, I thought I’d just talk a little bit about “supported” load balancers for Microsoft Office Live Communications Server 2005. Often the question from customers becomes, “Is my particular load balancer supported for your product?” Well… the product group didn’t specifically test the Barracuda 240, the Netscaler hardware load balancer, or any Cisco devices, but that doesn’t mean they aren’t supported. Foundry Networks, F5 Big-IP and Webmux load balancers were tested because those were available to us at the time… But if a hardware load balancer can be configured for SNAT (Source Network Address Translation) or DNAT (Destination Network Address Translation), can be configured for IP (Layer 2) forwarding, and is placed in a supported topology, then it should work just fine and would be supported. Whoa, that’s a bit of techy stuff to chew on… I’ll explain.

Load balancing is based on various forms of network address translation (NAT) at layers 2 and 3 of the networking stack. You may also know that network topologies can be built using different NAT modes. You with me so far? OK then… a load balancer can be connected in two LCS topologies: as an independent node on the network, or as an intermediary device between the Front End servers and the rest of the network. The first type of connection is referred to as a one-arm topology, while the latter is referred to as a two-arm topology. The product group supports only Front End servers with a single network interface card and a single IP address. Of course, as soon as you tell customers this, they go off and ask about configs that aren’t supported. Configurations involving two or more IP addresses and/or two or more network cards per Front End server could potentially work, but they are not tested and not supported by Microsoft. No support cases, no code changes if issues are discovered.
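
If you want a quick sanity check that a Front End meets that single-NIC, single-IP expectation, here’s a minimal sketch (my own illustration in Python, not anything that ships with LCS, and only a rough proxy since it looks at the addresses registered for the hostname rather than enumerating network cards):

```python
# Rough, illustrative check: how many non-loopback IP addresses does this host resolve to?
# A supported LCS 2005 Front End should show exactly one.
import socket

hostname = socket.gethostname()
_, _, ip_addresses = socket.gethostbyname_ex(hostname)
ip_addresses = [ip for ip in ip_addresses if not ip.startswith("127.")]

if len(ip_addresses) == 1:
    print("OK: single IP address (%s)" % ip_addresses[0])
else:
    print("Found %d addresses %s - multiple IPs/NICs are not a supported Front End config"
          % (len(ip_addresses), ip_addresses))
```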

The three possible forms of NAT used for load balancing are covered below, but only two NAT modes are supported for LCS, DNAT (Destination NAT) and SNAT (Source NAT), and there are two topologies that work with those NAT configs. For Destination NAT (or half-NAT), two arms with one or two IP subnets behind the load balancer is supported, or one arm with two IP subnets. For Source NAT (or full-NAT), two arms with one or two IP subnets is supported, or one arm with one or two IP subnets. Mouthful? Confusing? Not really if you whiteboard the two supported NAT modes and write the supported topologies underneath. The NAT mode that does not work for the LCS 2005 architecture is Direct Server Return, commonly referred to as “out-of-path mode”. This configuration will not work for the server management traffic that uses DCOM over TCP port 135, and that’s why it’s not supported. Otherwise you couldn’t open the admin tool and administer the Enterprise Edition pool through the load balancer.
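
To make the difference between the two supported NAT modes concrete, here’s a purely conceptual sketch (made-up addresses, not how any real load balancer is implemented) of what each mode does to a client packet’s IP headers:

```python
# Conceptual illustration of the two supported NAT modes (example addresses only).
VIP = "192.168.1.10"       # virtual IP exposed by the load balancer
LB_ADDR = "192.168.1.11"   # an address owned by the load balancer (used as source in full-NAT)
FRONT_END = "10.0.0.21"    # the Front End the load balancer picks for this connection

def dnat(packet):
    """Half-NAT: only the destination is rewritten; the client's source IP is preserved,
    so the Front End's replies must route back through the load balancer."""
    return {"src": packet["src"], "dst": FRONT_END}

def snat(packet):
    """Full-NAT: both source and destination are rewritten, so replies naturally
    return to the load balancer regardless of how the Front End routes."""
    return {"src": LB_ADDR, "dst": FRONT_END}

client_packet = {"src": "172.16.5.20", "dst": VIP}
print("DNAT:", dnat(client_packet))   # {'src': '172.16.5.20', 'dst': '10.0.0.21'}
print("SNAT:", snat(client_packet))   # {'src': '192.168.1.11', 'dst': '10.0.0.21'}
```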

Below is a concise list of load balancer connectivity requirements for setting up an LCS 2005 pool (there’s a quick spot-check sketch right after the list):

  1. The load balancer MUST expose an ARP-able Virtual IP Address (VIP).
  2. The load balancer MUST provide TCP-level affinity. That is, once a TCP connection is established with one Front End server, all subsequent traffic on that connection must go to the same Front End.
  3. The TCP idle timeout should be set to 20 minutes.
  4. Front Ends within a pool MUST be capable of routing to each other. There can be no NAT device in this path of communication. Any such device will prevent successful intra-pool communication over RPC.
  5. Front Ends MUST have access to the Active Directory environment.
  6. Front Ends MUST have static IP addresses that can be used to configure them in the load balancer. In addition, these IP addresses must have DNS registrations (referred to as Front End FQDNs).
  7. Administration machines MUST be able to route to both the Pool FQDN and the Front End FQDN of every Front End in the pool(s) to be managed. In addition, there can be no NAT device in the path of communication to the Front Ends. Again, this is a restriction enforced by DCOM’s use of the RPC protocol.
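
If you want a quick way to spot-check items 6 and 7 before opening the admin tool, here’s a rough sketch you could run from an administration machine. The FQDNs are made-up example values, and TCP port 135 is the RPC endpoint mapper used by the DCOM management traffic mentioned above; treat it as an illustration, not an official tool.

```python
# Rough pre-deployment check (example values only): can we resolve and reach
# the pool FQDN and each Front End FQDN from an administration machine?
import socket

POOL_FQDN = "pool01.contoso.com"        # example pool FQDN
FRONT_END_FQDNS = ["fe01.contoso.com",  # example Front End FQDNs
                   "fe02.contoso.com"]
DCOM_PORT = 135                         # RPC endpoint mapper used by the admin tool

def check(fqdn, port):
    try:
        ip = socket.gethostbyname(fqdn)  # requirements 6/7: DNS registration must exist
    except socket.gaierror:
        return "%s: DNS lookup failed" % fqdn
    try:
        sock = socket.create_connection((ip, port), timeout=5)
        sock.close()
        return "%s (%s): TCP %d reachable" % (fqdn, ip, port)
    except OSError:
        return "%s (%s): TCP %d NOT reachable" % (fqdn, ip, port)

for name in [POOL_FQDN] + FRONT_END_FQDNS:
    print(check(name, DCOM_PORT))
```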

So that covers the load balancer requirements, the supported topologies, and the supported NAT modes. But you should also review our public documentation here: https://office.microsoft.com/en-us/FX011526591033.aspx and read the whitepapers available for the three hardware load balancers that were tested. Those docs should give you a good overview and a feel for how to configure your hardware load balancer.

https://www.cainetworks.com/papers/webmux/lcs2005.htm

https://www.f5.com/solutions/deployment/index_lcs.html

https://www.foundrynet.com/pdf/wp-msft-live-com-server-load-bal.pdf

- Stu Osborn

Program Manager
