In round robin, each incoming packet or flow is sent to the next available port in the load balance group. Round robin typically cannot ensure that traffic from the same source/destination will reach the same processing appliance.
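The round-robin behavior described above can be sketched in a few lines of Python. This is an illustrative model only (the port names are hypothetical), not an NPB implementation; it shows why consecutive packets from the same flow can land on different appliances.

```python
from itertools import cycle

# Illustrative round-robin sketch: each new packet or flow is steered to
# the next port in the load balance group, with no affinity between a
# given source/destination and a particular appliance.
ports = cycle(["port1", "port2", "port3", "port4"])

# Six consecutive packets are spread across the group in order:
assignments = [next(ports) for _ in range(6)]
# -> ['port1', 'port2', 'port3', 'port4', 'port1', 'port2']
```

Because the selector advances on every packet, two packets of the same session can reach different tools, which is the limitation the hash-based methods below address.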
Traffic is distributed based on the selected header component. The hash ensures that all header values are distributed across the group, and that traffic carrying a given header value is always served by the same connected appliance.
Weighted least traffic - the load balancer monitors the bit rate on each output port of the load balance port group and distributes new traffic to the port with the least outgoing load.
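A minimal sketch of the weighted-least-traffic selection follows. The port names and bit rates are hypothetical illustration values, not a real NPB API; the point is only that new traffic is steered to whichever port currently carries the least egress load.

```python
def pick_least_loaded(port_bitrates):
    """Return the output port currently carrying the least traffic.

    port_bitrates: dict mapping port name -> measured egress bit rate (bps).
    """
    return min(port_bitrates, key=port_bitrates.get)

# Hypothetical measured egress rates for a 3-port load balance group:
rates = {"port1": 4_200_000, "port2": 1_100_000, "port3": 3_800_000}
pick_least_loaded(rates)  # -> "port2"
```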
Load balancing on traffic characteristics is typically based on a header. By specifying the desired header, traffic is load balanced on the dynamic values found in that header, without the need to explicitly define a header value. This is achieved by hashing the header value and distributing the traffic among the ports in the load balance group based on the hash. For example, when load balancing on the IP address, traffic is automatically distributed by the traffic's IP address. You can also load balance by specific IP addresses; this is achieved through a filtering mechanism, and while the end result is load balanced traffic, this latter method is not strictly considered a load balancing policy.
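The hash-and-distribute principle above can be sketched as follows. Real NPB hardware uses its own hash function, and the CRC32 used here is only a stand-in to illustrate the property that matters: the mapping is deterministic, so the same header value always selects the same port.

```python
import zlib

def port_for_header(header_value: str, num_ports: int) -> int:
    """Map a header value onto one of num_ports output ports.

    The hash is deterministic, so a given header value (e.g. an IP
    address) is always served by the same connected appliance.
    """
    h = zlib.crc32(header_value.encode())  # deterministic hash of the header
    return h % num_ports                   # fold the hash onto the port group

# Packets carrying the same IP always land on the same appliance:
port_for_header("10.0.0.7", 4) == port_for_header("10.0.0.7", 4)  # True
```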
Methods used in load balancing on traffic flow typically involve “counting” the traffic going to the load balanced output ports and ensuring that the desired traffic volume distribution is maintained.
In this method, the load balance is based on an XOR hash of the source and destination. It is useful for ensuring that the same session flow, whether from A to B or from B to A, will always reach the same network tool. This method achieves the stateful-like results often required by security tools.
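Why XOR gives this symmetry can be shown in a short sketch. Because XOR is commutative, swapping source and destination produces the same key, so both directions of a session reach the same tool. This is an illustrative model, not the hardware hash.

```python
import ipaddress

def symmetric_port(src: str, dst: str, num_ports: int) -> int:
    """Select a port from the XOR of source and destination addresses.

    XOR is commutative, so (src, dst) and (dst, src) yield the same
    key, and both directions of a flow reach the same appliance.
    """
    a = int(ipaddress.ip_address(src))
    b = int(ipaddress.ip_address(dst))
    return (a ^ b) % num_ports

# A->B and B->A map to the same tool:
symmetric_port("10.0.0.1", "10.0.0.2", 4) == \
    symmetric_port("10.0.0.2", "10.0.0.1", 4)  # True
```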
In this method, the load balance can be keyed on a specific header when the packet carries multiple headers. This is useful with tunneling protocols, where the user may need to load balance on the inner IP or the outer IP, or when handling GTP traffic.
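The inner-versus-outer choice can be sketched as follows. The field names here are hypothetical stand-ins for parsed packet headers, used only to show that the balancer keys on a different address depending on which header is selected.

```python
def balance_key(packet: dict, use_inner: bool) -> str:
    """Pick the load balance key for a tunneled packet.

    packet: hypothetical parsed headers of an encapsulated packet,
    with both the outer (tunnel) and inner (payload) source IPs.
    """
    return packet["inner_src_ip"] if use_inner else packet["outer_src_ip"]

# A tunneled packet: outer header from the tunnel endpoint,
# inner header from the original sender.
pkt = {"outer_src_ip": "203.0.113.5", "inner_src_ip": "10.1.2.3"}
balance_key(pkt, use_inner=True)   # -> "10.1.2.3"
balance_key(pkt, use_inner=False)  # -> "203.0.113.5"
```

Balancing on the inner IP spreads the many encapsulated flows across the tools, whereas balancing on the outer IP would pin the whole tunnel to one appliance.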
In this method, the user load balances on the IP address but lets only the most significant bytes of the header participate in determining the load balance criteria. This gives the user finer granularity and control over the desired outcome.
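A sketch of masking the low-order bytes before hashing, assuming a /16 mask for illustration: every host in the same 10.1.x.x range then selects the same port.

```python
import ipaddress

def masked_port(ip: str, prefix_len: int, num_ports: int) -> int:
    """Select a port using only the most significant bits of the IP.

    prefix_len controls how many leading bits participate; the
    low-order bytes are zeroed out before the port is chosen.
    """
    net = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return int(net.network_address) % num_ports

# With a /16 mask, hosts in the same 10.1.0.0/16 range share a port:
masked_port("10.1.2.3", 16, 4) == masked_port("10.1.200.9", 16, 4)  # True
```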
Load balancing increases the efficiency of both Security inline (inband) and Monitoring tap (out-of-band) network visibility deployments.
The figure depicts bi-directional traffic from network N1 load balanced across 4 inline appliances and returning to network N2. The actual traffic flow is shown only for inline appliance 1, though it takes place for each of the inline appliances. Note that the traffic is load balanced so that each inline appliance receives a different subset of the traffic.
The figure depicts bi-directional traffic from network N1 to network N2 that is being tapped. The traffic load is balanced across 4 monitoring appliances. The actual traffic flow is shown for monitor appliance 1 (with Rx traffic from both sides), though it takes place for each of the monitor appliances. Note that the traffic is load balanced so that each monitor appliance receives a different subset of the traffic.
When implementing a load balancing mechanism in your network, here are some of the questions that should be addressed:
What is being balanced and how (and what are the load balancing rules)?
Should the load balancing mechanism be combined with filtering to achieve better results? For example, first filter for HTTP traffic, and then load balance only that filtered traffic?
A load balancing mechanism is a means to an end, not an end in and of itself. Since multiple network tools participate in the load balancing process, the network manager should look at the implementation as a whole:
And of course, how should the reverse, recovery process be handled: detecting that a network tool is back online, and deciding what traffic, if any, to send to it? Should any of these processes be automatic, or enabled only by user intervention?
For example, in case of a network tool failure, you may want to automatically redistribute traffic to the remaining tools, or decide not to handle that traffic at all, because the remaining tools cannot absorb the extra capacity and diverting more traffic to them would reduce their performance. This decision may also be affected by whether the load balance mechanism serves inline security tools or out-of-band monitoring tools. When inline tools are load balanced, you may prefer to reroute (bypass) the traffic that was going to the failed tool back to the network.
At Niagara Networks we understand that your ability to quickly and easily implement an efficient and flexible load balancing policy can be the key to a successful deployment of security and monitoring tools on the Visibility Layer. Sophisticated load balancing capabilities are available as standard on all Niagara Networks NPBs.
A load balance policy can be configured per device, meaning that any group of ports, and multiple groups of ports, can be assigned the same device policy. In addition to the device policy, separate custom policies can be defined, allowing multiple different load balance policies to be supported at the same time. Given the increasing port density of Network Packet Brokers and the growing range of supported speeds and feeds, from 1/10Gb to 100Gb, the ability to support different load balance policies concurrently affords the network manager and security architect unparalleled flexibility.
Implementing and configuring load balance flows, as well as the desired action in case of an appliance failure in the load balance port group, can be a complex, error-prone and time-consuming endeavor. At Niagara Networks, these options are available at the click of a button, and setting up a complex load balance implementation on inline appliances is made easy and intuitive.
At Niagara Networks we understand that a successful load balance implementation is not only about the policy governing which parameter is used to balance the traffic. In an integrated approach we also enable the definition and setup of heartbeat packets, which are proactively generated on the connected appliance port to determine its availability, as well as the choice of whether to redistribute traffic to the remaining available appliances in the port group. In case of failure, traffic is automatically redistributed without manual user intervention or setup, increasing service availability and optimizing resource usage and uptime.
Combine a device policy and multiple custom policies on a single device
Monitoring and Inline ports
Multiple header support, L2-L4
XOR option between selected headers
Network visibility is fundamental to ensuring optimal network performance in the face of growing network complexity and data loads.
Download our white paper on Network Visibility to gain deeper insight into the importance and implementation of visibility devices throughout your network, and learn how to eliminate blind spots.