Load Balancing – What is it all about?

Load balancing is a mechanism that distributes traffic from an input to multiple output ports according to a chosen algorithm.
Load balancing benefits both out-of-band deployments, where network tools monitor traffic without affecting its flow, and in-band (inline) deployments, where the network tool can act on the traffic passing through it.
Thus, the load balancing mechanism supports multiple use cases and delivers significant end-user benefits in operating efficiency and reduced capital expenditure.

Use Cases

Preventing performance erosion

If the amount of traffic that your network tools (firewalls, NPMD, IPS, IDS, etc.) need to process exceeds the capacity of a single tool, it is often more cost-effective to deploy multiple lower-capacity tools than to switch to a single high-cost tool. By load balancing, you can distribute the traffic among multiple tools, sharing the traffic and processing load.

Pay-as-you-grow deployment

Load balancing traffic and processing load reduces your total cost of ownership by letting you spend CAPEX and OPEX only on the tools you need. As traffic increases, you can connect more network tools with safety and confidence.

Resilient network design

Traffic processing can be distributed to support more advanced redundancy schemes, for example 3+1. In this case, traffic is distributed among 4 network tools, and in case one of them fails, the traffic going to the failed tool is redistributed to the remaining tools. This allows for one network tool to fail (or be taken down for maintenance) without affecting the security or monitoring services used on the network.
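The 3+1 failover behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a vendor implementation; the tool names and flow-key format are assumptions made for the example.

```python
import zlib

def pick_tool(flow_key: str, tools: list[str]) -> str:
    """Map a flow to one of the currently available tools by hashing its key."""
    h = zlib.crc32(flow_key.encode())
    return tools[h % len(tools)]

# Four tools in a 3+1 scheme; names are illustrative.
tools = ["tool-1", "tool-2", "tool-3", "tool-4"]
flow = "10.0.0.1->10.0.0.2"
primary = pick_tool(flow, tools)

# Simulate failure of the tool serving this flow: drop it from the
# group and re-pick, so the flow lands on one of the three survivors.
remaining = [t for t in tools if t != primary]
fallback = pick_tool(flow, remaining)
```

In a real deployment the redistribution is done by the packet broker per flow, but the principle is the same: hashing over the set of currently healthy tools.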

Upgrade avoidance

When you upgrade your network infrastructure from 1Gb to 10Gb, or from 10Gb to 40Gb or 100Gb, you can postpone, and in some cases preclude, the need to also upgrade your network tools by using load balance configurations.

Various algorithms are used for load balancing.
The most common are as follows:

Round Robin

Each incoming packet or flow is sent to the next available port in the load balance group. With round robin, you typically cannot ensure that traffic from the same source/destination pair reaches the same processing appliance.
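A minimal round-robin distributor can be sketched as follows; port and packet names are illustrative assumptions.

```python
from itertools import cycle

# Each packet goes to the next port in the group, wrapping around.
ports = cycle(["port-A", "port-B", "port-C"])
packets = ["pkt1", "pkt2", "pkt3", "pkt4"]
assignments = [(pkt, next(ports)) for pkt in packets]

# pkt4 wraps around to port-A. Note that two packets of the same flow
# may land on different ports -- the stickiness caveat noted above.
```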

Header Hash

Traffic is distributed based on the selected header component. The hash ensures that all header values are distributed across the group, and that a given header value is always served by the same connected appliance.
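The determinism property is easy to see in a sketch: hashing a chosen header field (here, the source IP, as an illustrative choice) always yields the same output port for the same value. CRC32 stands in for whatever hash a real device uses.

```python
import zlib

def port_for(src_ip: str, num_ports: int) -> int:
    """Hash a header value to pick an output port in the group."""
    return zlib.crc32(src_ip.encode()) % num_ports

# The same header value always maps to the same port...
a = port_for("192.0.2.1", 4)
b = port_for("192.0.2.1", 4)
# ...while different values spread across the group.
```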

Least Traffic

Weighted least traffic: the load balancer monitors the bitrate on each output port of the load balance port group and distributes traffic to the port with the least outgoing traffic.

Load balancing thus results in increased resource effectiveness and reduced costs. Network managers can maximize the use of existing (or soon-to-be-purchased) lower-rated, less expensive equipment to deal with increasing amounts of traffic. In addition, load balancing facilitates the efficient growth and scalability of the network as traffic loads increase over time.

Methods of Load Balancing

Load balancing can be performed on traffic characteristics or on the traffic flow.

Traffic Characteristics

Methods used to load balance on traffic characteristics are typically based on the packet header. By specifying the desired header, traffic is load balanced on the dynamic values found in that header, without the need to explicitly define a header value. This is achieved by hashing the header value and distributing the traffic among the ports in the load balance group based on the hash. For example, load balancing on the IP address distributes traffic automatically by each packet's IP address. You can also load balance by specific IP addresses through a filtering mechanism; while the end result is load-balanced traffic, this latter method is not strictly considered a load balancing policy.

Traffic Flow

Methods used to load balance on traffic flow typically involve "counting" the traffic going to the load-balanced output ports and ensuring that the desired traffic volume distribution is maintained.

More advanced methods of load balancing involving packet header include:

Source and Destination

In this method, the load balance is based on an XOR hash of the source and destination. This is useful for ensuring that the same session flow always reaches the same network tool, whether the flow is from A to B or from B to A. This method achieves the stateful-like results often required by security tools.
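The symmetry of the XOR hash can be shown directly: because XOR is commutative, hashing (src, dst) and (dst, src) yields the same result, so both directions of a session reach the same tool. CRC32 is used here as a stand-in hash; the addresses are illustrative.

```python
import zlib

def symmetric_port(src: str, dst: str, num_ports: int) -> int:
    """XOR the source and destination hashes so A->B and B->A
    map to the same output port."""
    return (zlib.crc32(src.encode()) ^ zlib.crc32(dst.encode())) % num_ports

forward = symmetric_port("10.0.0.1", "10.0.0.2", 4)
reverse = symmetric_port("10.0.0.2", "10.0.0.1", 4)
# Both directions of the session land on the same tool.
```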

Multiple Headers

In this method, when a packet carries multiple headers, the load balance can be specified on a particular one. This is useful with tunneling protocols, where the user may need to load balance on the inner IP or the outer IP, or when handling GTP traffic.

Header Byte Selection

In this method, the user load balances on the IP address but uses only its most significant bytes to determine the load balance criteria. This gives better granularity and control over the desired outcome.
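Byte selection can be sketched by hashing only a prefix of the packed address. In this illustrative example, only the first two bytes (a /16-style prefix) participate, so all hosts in the same 10.1.x.x range map to the same port; the addresses and prefix length are assumptions for the sketch.

```python
import ipaddress

def port_by_prefix(ip: str, num_ports: int, prefix_bytes: int = 2) -> int:
    """Use only the most significant bytes of the IP address to
    determine the output port."""
    prefix = ipaddress.ip_address(ip).packed[:prefix_bytes]
    return int.from_bytes(prefix, "big") % num_ports

# Hosts sharing the 10.1.x.x prefix go to the same port...
p1 = port_by_prefix("10.1.5.9", 4)
p2 = port_by_prefix("10.1.200.7", 4)
# ...while a different prefix can land elsewhere.
p3 = port_by_prefix("10.2.0.1", 4)
```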

Deployments using Load balancing

Load balancing increases the efficiency of both Security inline (inband) and Monitoring tap (out-of-band) network visibility deployments.

Load Balance In-Line (in-band)

The figure depicts bi-directional traffic from network N1 load balanced across four inline appliances and returning to network N2. The actual traffic flow is shown only for inline appliance 1, though the same takes place for each of the inline appliances. Note that the traffic is load balanced so that each inline appliance receives a different subset of it.


Load Balance Monitor (out-of-band)

The figure depicts bi-directional traffic from network N1 to network N2 that is being tapped. The traffic load is balanced across four monitoring appliances. The actual traffic flow is shown only for monitor appliance 1 (with Rx traffic from both sides), though the same takes place for each of the monitor appliances. Note that the traffic is load balanced so that each monitor appliance receives a different subset of it.

Network considerations for load balance implementation

When implementing a load balancing mechanism in your network, here are some of the questions that should be addressed:

  • The How and the What of load balancing

    What is being balanced and how (and what are the load balancing rules)?

  • Load balancing combinations (with filters or taps)

    Should the load balancing mechanism be combined with filtering to achieve better results? For example, first filter by the HTTP traffic, and then only load balance that filtered traffic?

  • Failure detection and recovery

A load balancing mechanism is a means to an end, not an end in itself. Since multiple network tools participate in the load balancing process, the network manager should look at the implementation as a whole:

    • How do I detect that a network tool participating in the load balancing has failed?
    • What do I do with the traffic that was processed by the failed tool?

And, of course, how to handle the reverse, recovery process: detecting that the network tool is back online, and deciding what traffic, if any, to send to it. Should any of these processes be automatic, or enabled only by user intervention?

For example, in case of network tool failure, you may want to automatically redistribute traffic to the remaining tools, or decide not to handle that traffic at all, because the remaining tools cannot absorb the extra capacity and diverting more traffic to them would reduce their performance. This decision may also depend on whether the load balance mechanism serves inline security tools or out-of-band monitoring tools. When inline tools are being load balanced, you may prefer to reroute (bypass) the traffic that was going to the failed tool back to the network.

The Niagara Networks Way

At Niagara Networks we understand that your ability to quickly and easily implement an efficient and flexible load balancing policy can be the key to a successful deployment of security and monitoring tools on the Visibility Layer. Sophisticated load balancing capabilities are available as standard on all Niagara Networks NPBs.


Load Balancing with Niagara Networks

Flexible Load balance configurations

A load balance policy can be configured per device, meaning that any group of ports, or multiple groups of ports, can be assigned the same device policy. In addition to the device policy, separate custom policies can be defined, allowing multiple different load balance policies to be supported at the same time. Given the increasing port density of Network Packet Brokers and their growing support for different speeds and feeds, ranging from 1/10Gb to 100Gb, the ability to support different load balance policies concurrently affords the network manager and security architect unparalleled flexibility.

User friendly

Implementing and configuring load balance flows, as well as the desired action in case of an appliance failure in the load balance port group, can be a complex, error-prone, and time-consuming endeavor. At Niagara Networks, options are available at the click of a button, and setting up complex load balance implementations on inline appliances is made easy and intuitive.

Integrated approach

At Niagara Networks we understand that a successful load balance implementation is not only about the policy defining which parameter to balance on. Our integrated approach also enables the definition and setup of heartbeat packets, which are proactively generated toward the connected appliance port to determine its availability, as well as the definition of whether to redistribute traffic to the remaining available appliances in the port group. In case of failure, traffic is automatically redistributed without manual user intervention or setup, increasing service availability and optimizing resource usage and uptime.

Load balance features

  • Multiple load balance policies per device

    Combine device and multiple custom policies in single device

  • Heartbeat packets

    Monitoring and Inline ports
    Interval settings

  • Automate actions in case of appliance failure

    Port bypass
    Traffic distribution
    Traffic re-distribution

  • Load balance on traffic headers

    Multiple header support for L2-L4
    XOR option between selected headers

A Clearer View of Your Network

White Paper

Network Visibility is the fundamental element to ensuring optimal network performance in the face of growing network complexity and data loads.

Download our white paper on Network Visibility and gain deeper insight and understanding of the importance and implementation of visibility devices throughout your network and learn how to eliminate blind spots.