Load Balancing or Load Sharing – What is it all about?

Load balancing optimizes the scalability and processing capability of network and security appliances by allowing tools to be pooled so that together they can inspect more traffic.

Such powerful functionality lets you match traffic data rates to the number and capacity of in-place NGFW, IDPS, and NPMD platforms, thereby simplifying network monitoring and security architecture design.

Use Cases - Optimizing your Cybersecurity and Monitoring Tools

Preventing Performance Erosion

If the amount of traffic that your network and security tools (firewalls, WAF, IPS, IDS, DLP, NPMD, etc.) need to process exceeds the capacity of a single tool, it is often more cost-effective to deploy multiple lower-capacity tools than to replace it with a single high-cost tool. Load balancing lets you distribute the traffic between multiple tools so they share the traffic and processing load.
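As a rough illustration (not tied to any specific packet broker), a stable hash of a flow's 5-tuple can map every packet of a session to the same tool in the pool while spreading the aggregate load; the tool names and fields below are hypothetical.

```python
import hashlib

def pick_tool(five_tuple, tools):
    """Map a flow to one tool in the pool via a stable hash of its 5-tuple,
    so every packet of the same session lands on the same tool while the
    aggregate load is shared across the pool."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return tools[digest % len(tools)]

# Hypothetical pool: four 10Gb IPS appliances sharing one 40Gb feed.
ips_pool = ["ips-1", "ips-2", "ips-3", "ips-4"]
flow = ("10.0.0.5", "198.51.100.7", 51514, 443, "TCP")  # src IP, dst IP, src port, dst port, protocol
print(pick_tool(flow, ips_pool))
```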

Pay-as-you-Grow Deployment

Load balancing traffic and processing load reduces your total cost of ownership by letting you spend CAPEX and OPEX only on the tools you actually need.

 

As traffic increases, you can connect additional cybersecurity and network monitoring tools with safety and confidence.

Resilient Carrier-Grade Design

Traffic processing can be distributed to support more advanced redundancy schemes such as n+1.

For example, in a 3+1 configuration, traffic is distributed among four network tools; if one of them fails, the traffic going to the failed tool is redistributed to the remaining three. This allows one network or security tool to fail (or be taken down for maintenance) without affecting the security or monitoring services running on the network.
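To make the 3+1 arithmetic concrete, here is a minimal sketch assuming a simple hash-bucket model (not any vendor's actual implementation): each tool owns a share of hash buckets, and only the failed tool's buckets are reassigned to the survivors.

```python
def assign_buckets(tools, num_buckets=12):
    """Spread hash buckets evenly across the active tools (~25% each for 4 tools)."""
    return {b: tools[b % len(tools)] for b in range(num_buckets)}

def redistribute(bucket_map, failed_tool, remaining):
    """Reassign only the failed tool's buckets, round-robin, to the remaining tools."""
    new_map = dict(bucket_map)
    orphans = [b for b, t in bucket_map.items() if t == failed_tool]
    for i, b in enumerate(orphans):
        new_map[b] = remaining[i % len(remaining)]
    return new_map

tools = ["tool-A", "tool-B", "tool-C", "tool-D"]          # hypothetical 3+1 group
buckets = assign_buckets(tools)
after_failure = redistribute(buckets, "tool-D", ["tool-A", "tool-B", "tool-C"])
# Each survivor now handles ~33% of the traffic: its own 25% plus a third of tool-D's share.
```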

Upgrade Avoidance

When you upgrade your network infrastructure from 1Gb to 10Gb, or from 10Gb to 40Gb or 100Gb, load-balanced configurations let you postpone, and in some cases avoid entirely, upgrading your network tools as well.

All of these use cases deliver significant gains in operating efficiency and reductions in CAPEX.

Niagara Networks solutions process terabits of traffic in 100Gb, 40Gb, and 10Gb increments, feeding the right data to the right tool with optimized load balancing and continuous resilience.

Considerations for Load Balance Implementation

When implementing a load balancing mechanism in your network architecture, here are some of the questions that should be addressed:

  • The How and the What of load balancing

    What is being balanced and how (and what are the load balancing rules)?

  • Load balancing combinations (with filters or network TAPs)

    Should the load balancing mechanism be combined with filtering to achieve better results? For example, should you first filter for HTTP traffic and then load balance only that filtered traffic? (See the sketch after this list.)

  • Failure detection and recovery

    A load balancing mechanism is a means to an end, not an end in and of itself. Since multiple network tools participate in the load balancing process, the network manager should look at the implementation as a whole:

    • How do I detect that a network tool participating in the load balancing has failed?
    • What do I do with the traffic that was processed by the failed tool?

    And of course, how should the reverse, recovery process be handled: detecting that the network tool is back online and deciding what traffic, if any, to send to it? Should any of these processes be automatic, or should they require user intervention?

    ❕For example, in case of a network tool failure, you may want to automatically redistribute its traffic to the remaining tools, or you may decide not to handle that traffic at all because the remaining tools cannot absorb the extra load, and diverting more traffic to them would degrade their performance. This decision may also depend on whether the load balance mechanism serves inline cybersecurity tools or out-of-band monitoring tools. When inline tools are being load balanced, you may prefer to reroute (bypass) the traffic that was going to the failed tool back to the network.
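A minimal sketch of these decisions, using assumed names and thresholds (nothing here is a real NPB API): traffic is first filtered, then hashed onto a tool group, and a simple policy decides what happens when a member fails.

```python
import hashlib

def is_http(pkt):
    """Toy pre-filter: pass only web traffic into the load balance group."""
    return pkt["dst_port"] in (80, 443)

def balance(pkt, tools):
    """Hash the filtered packet's endpoints onto one member of the tool group."""
    key = f"{pkt['src_ip']}|{pkt['dst_ip']}".encode()
    return tools[int(hashlib.sha256(key).hexdigest(), 16) % len(tools)]

def on_failure(mode, survivors_can_absorb):
    """Failure policy: redistribute if the survivors have headroom; otherwise
    bypass to the network for inline tools, or drop the excess for out-of-band tools."""
    if survivors_can_absorb:
        return "redistribute"
    return "bypass-to-network" if mode == "inline" else "drop-excess"

waf_group = ["waf-1", "waf-2", "waf-3"]                    # hypothetical inline WAF group
pkt = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9", "dst_port": 443}
if is_http(pkt):
    print(balance(pkt, waf_group))                         # only filtered traffic is balanced
print(on_failure("inline", survivors_can_absorb=False))    # -> bypass-to-network
```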

The Niagara Networks Advanced Solution

At Niagara Networks, we understand that your ability to quickly and easily implement an efficient and flexible load balancing policy can be the key to a successful deployment of cybersecurity & monitoring tools on the visibility layer.

Sophisticated load balancing capabilities are available as standard on all Niagara Network Packet Brokers and at all network interface speeds.

Flexible Load Balance Configurations

A load balance policy can be configured per device, meaning that any port group, or even multiple port groups, can be assigned this same device-wide policy. In addition to the device policy, separate custom policies can be defined, allowing multiple different load balance policies to be supported at the same time.

 

Given the increasing port density of Network Packet Brokers and the increasing support of different speeds and feeds ranging from 1/10Gb to 40/100Gb, the ability to support concurrently different load balance policies affords the network manager and security architect unparalleled flexibility.
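As a hypothetical data model (not Niagara's actual configuration schema), the relationship between a device-wide policy and per-port-group custom policies might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class LoadBalancePolicy:
    """Illustrative policy: which header fields feed the distribution hash."""
    name: str
    hash_fields: list

@dataclass
class PacketBrokerConfig:
    """One device-wide default policy plus named custom policies bound to port groups."""
    device_policy: LoadBalancePolicy
    custom_policies: dict = field(default_factory=dict)   # port group -> policy

cfg = PacketBrokerConfig(
    device_policy=LoadBalancePolicy("default", ["src_ip", "dst_ip"])
)
# A 100Gb core port group gets its own 5-tuple policy while other groups keep the default.
cfg.custom_policies["pg-100g-core"] = LoadBalancePolicy(
    "core-5tuple", ["src_ip", "dst_ip", "src_port", "dst_port", "proto"]
)
```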

User Friendly

Implementing and configuring load balance flows, as well as the desired action in case of an appliance failure in the load balance port group, can be a complex, error-prone, and time-consuming endeavor.

 

At Niagara Networks, these options are available at the click of a button, making it easy and intuitive to set up complex load balance implementations on inline appliances.

Learn More About N2 Series 

 

Integrated Approach

At Niagara Networks, we understand that setting up a successful load balance implementation is not only about the policy that defines which parameters are used to load balance the traffic. In an integrated approach, we also enable the definition and setup of heartbeat packets.

 

Heartbeat packets are proactively generated on the connected appliance port to determine its availability.

The integrated approach also lets you define whether to redistribute traffic to the remaining available appliances in the port group. In case of failure, traffic is then automatically redistributed without manual user intervention or setup, increasing service availability and optimizing resource usage and uptime.
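As an illustration of the mechanism (using made-up callback names, not a real NPB API), a heartbeat monitor might inject a probe on the appliance port, wait for it to return, and declare the tool down after a few consecutive misses:

```python
import time

def monitor_appliance(send_heartbeat, heartbeat_returned, on_down,
                      interval_s=1.0, misses_allowed=3):
    """Send a heartbeat out the appliance port at a fixed interval; if it fails to
    return misses_allowed times in a row, declare the appliance down so the group
    can redistribute its traffic. All three callbacks are caller-supplied."""
    misses = 0
    while True:
        send_heartbeat()
        time.sleep(interval_s)
        if heartbeat_returned():
            misses = 0
        else:
            misses += 1
            if misses >= misses_allowed:
                on_down()   # e.g. trigger automatic re-distribution to the rest of the group
                return

# Example wiring with stub callbacks (a real deployment would hook into the packet broker):
# monitor_appliance(lambda: None, lambda: False, lambda: print("appliance down"))
```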

Load Balance Key Features

  • Multiple load balance policies per device

    Combine a device policy and multiple custom policies on one platform

  • Heartbeat packets

    - Bi-directional
    - Monitoring and Inline ports
    - Interval settings

  • Automate actions in case of appliance failure

    - Port bypass
    - Traffic distribution
    - Traffic re-distribution

  • Intelligent Policy-Triggered Actions

    - Includes traffic steering, load balancing, routing traffic via different ports, deactivating links, performing fail-over of traffic, and more

  • Load balance on traffic headers

    - Support for multiple header fields
    - XOR option between selected headers
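A minimal sketch of header-based hashing with an XOR option, using illustrative field names: XOR-combining the per-field hashes makes the result direction-independent when paired fields (such as source and destination IP) are selected, so both sides of a conversation reach the same group member.

```python
import hashlib

def field_hash(value):
    """Stable hash of a single header field value."""
    return int(hashlib.md5(str(value).encode()).hexdigest(), 16)

def pick_member(selected_headers, group_size, use_xor=True):
    """Choose an output port by hashing the selected header fields; with use_xor
    the per-field hashes are XOR-combined, which is order- and direction-independent."""
    if use_xor:
        combined = 0
        for value in selected_headers.values():
            combined ^= field_hash(value)
    else:
        combined = field_hash("|".join(str(v) for v in selected_headers.values()))
    return combined % group_size

# Both directions of the same conversation land on the same member of a 4-port group.
forward = {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.7"}
reverse = {"src_ip": "198.51.100.7", "dst_ip": "10.0.0.5"}
assert pick_member(forward, 4) == pick_member(reverse, 4)
```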

Network Visibility

White Paper

Defining the Future of Network Visibility Fabrics

Today, enterprises are utilizing the advanced features found in superior NPBs. Investing in additional solutions such as public cloud and virtual fabric technologies is not only the norm, it's a necessity.

Download our white paper for an in-depth look at current usage and emerging best practices for network visibility fabrics and network packet brokers.