NetApp Knowledge Base

How does load balancing on a VIF work?

Applies to

Data ONTAP 7
Data ONTAP 8


Overview of VIF/ifgrp types

The NetApp implementation of trunking is called a VIF (virtual interface). Its behavior depends on the type of VIF you set up. The three types of VIFs (called ifgrps in Data ONTAP 8 and later) are:

  • Single-mode - No load balancing, strictly failover
  • Multi-mode - Load-balancing
  • LACP - Load-balancing, with low-level connectivity verification
Standard single-mode VIF/ifgrp

If the physical link on one interface is broken, network sessions persist and fail over to another interface in the VIF (a VIF can contain up to 16 interfaces).
This is the simplest VIF implementation, since it requires no switch configuration.

Note: Single-mode VIFs can span different switches because they require no specific switch configuration. All ports of a multimode VIF, however, normally have to be connected to a single switch. Whether a multimode VIF can span switches depends on the switch, and most switches do not support it.

Standard multi-mode VIF/ifgrp

A multimode VIF/ifgrp allows all member interfaces to be used for traffic. This mode provides failover and increases the potential aggregate bandwidth available to multiple connected hosts or TCP streams.

One common misconception, however, is that the available "load-balancing" algorithms allow a given single host to take advantage of the total aggregated bandwidth.

In other words, if a multimode or LACP VIF/ifgrp is created that contains four 1Gbps member interfaces, it is sometimes assumed that a host capable of taking advantage of 4Gbps (perhaps it has a 10Gbps Network Interface Card installed) would be able to achieve up to 4Gbps when communicating with the four-member VIF/ifgrp. In reality, each host (or each TCP/UDP stream in the case of "-b port") can use only one member interface. In this example, the maximum bandwidth available to a given host or TCP/UDP stream would be 1Gbps. A different host or stream, however, would have the opportunity to use a different member interface, reducing port usage and contention.

Therefore, the VIF/ifgrp would allow multiple hosts or streams to send and receive up to 4Gbps in aggregate to this Storage Controller, but each individual host or stream would only be able to achieve 1Gbps.

This behavior is not unique to NetApp, and is a basic principle of 802.3ad link aggregation used in modern switches.
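The per-host limit described above can be sketched in a few lines of Python. This is an illustration only, not NetApp's actual algorithm: the point is that the member interface is chosen deterministically from the flow key (here, just the source IP), so one host always lands on the same link.

```python
# Illustrative sketch only -- not NetApp's actual algorithm. It shows
# why one host never exceeds a single member link's bandwidth: the
# member interface is chosen deterministically from the flow key.

def pick_member_by_ip(src_ip: str, num_links: int) -> int:
    """Map a source IP to a member interface index (0..num_links-1)."""
    last_octet = int(src_ip.rsplit(".", 1)[1])
    return last_octet % num_links  # same host always -> same link

# Every packet from one hypothetical host maps to the same one of four
# 1Gbps links, so that host can never use more than 1Gbps.
members = {pick_member_by_ip("192.168.1.50", 4) for _ in range(1000)}
print(members)  # a single member index, no matter how many packets
```

A second host with a different address may hash to a different member, which is how the aggregate 4Gbps becomes usable across many hosts.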

LACP VIF/ifgrp

LACP VIFs/ifgrps are functionally identical to multimode VIFs/ifgrps, with one exception: LACP adds 802.3ad link fault detection using the Link Aggregation Control Protocol (LACP). LACP does not change how load distribution functions; it only adds a layer of fault detection to the VIF/ifgrp.

Using LACP, each physical member interface (physical port) of the VIF/ifgrp validates that it can exchange low-level protocol data units (PDUs) with the switch port to which it is directly connected. If these PDUs cannot be exchanged, the member interface is removed from participation in the VIF/ifgrp. LACP can only be configured on a first-level VIF/ifgrp, as PDUs can only be exchanged between physical interfaces (physical ports) and the ports on the switch.

Load balancing algorithms

There are four types of load balancing for multimode and LACP VIFs/ifgrps:


IP based

Traffic balancing is based on the source IP address. This is the default, and it allows each host to use up to one member interface within a VIF/ifgrp.


MAC based

Traffic balancing is based on the source MAC address (which, in a routed environment, is usually always the same: that of the router), and it allows each host to use up to one member interface within a VIF/ifgrp.

Round Robin

The outgoing interface through which the Ethernet packets leave cannot be predetermined. Transmitted responses use the member interfaces in a round-robin fashion. This is the only method that ensures true load balancing under adverse conditions, such as one or more top-talking hosts with badly distributed IP and/or MAC addresses.
NOTE: In Data ONTAP 7.3.1, support for round robin is no longer available.
NOTE: Round Robin load balancing can result in out of order packet delivery due to minute variances in the delivery path.  This can increase TCP Duplicate ACKs and retransmissions.  Therefore, some protocols and applications will not perform acceptably well when used in conjunction with Round Robin load balancing.


Port based

Traffic balancing is based on the source and destination IP addresses along with the transport-layer port number; this is also referred to as IP+port. It is available in Data ONTAP 7.3.1D3 and in releases after 7.3.2. Because this choice adds the TCP/UDP port number to the calculation, a single host can have multiple TCP streams to the Storage Controller, each stream using a different member interface within the VIF/ifgrp. This can allow for bandwidth usage higher than the throughput of a single member interface.
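The effect of adding the port to the flow key can be sketched as follows. This is an illustration only: the real "-b port" hash in Data ONTAP differs (zlib.crc32 stands in for it here), and the addresses and ports are hypothetical, but the principle is the same.

```python
import zlib

# Illustrative sketch only -- not the real "-b port" hash. Adding the
# TCP/UDP port to the flow key lets separate streams from the SAME
# host land on different member interfaces, while each individual
# stream still sticks to exactly one member.

def pick_member(src_ip: str, dst_ip: str, src_port: int,
                num_links: int) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}".encode()
    return zlib.crc32(key) % num_links  # deterministic per stream

# One hypothetical host opens four TCP connections to the controller:
links = [pick_member("192.168.1.50", "192.168.1.10", port, 4)
         for port in range(50000, 50004)]
print(links)  # each stream maps to one member; streams may differ
```

Because each stream's key includes its ephemeral port, four concurrent streams from one host can, in aggregate, exceed a single member interface's throughput.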

ONTAP load balancing implementation

There is some confusion about how IP- and MAC-based load balancing work. The algorithm used to balance traffic across the member interfaces in a multimode or LACP VIF/ifgrp changed in Data ONTAP 7.3.2.

Data ONTAP releases after (and including) 7.3.2:

As of Data ONTAP 7.3.2, a multimode or LACP ifgrp/vif uses an implementation of a "SuperFastHash", utilizing the last 16 bits of the source and destination IP addresses (-b ip), the last 16 bits of the source and destination MAC addresses (-b mac), or the last 16 bits of the source and destination IP addresses in combination with the source and destination TCP port (-b port). 

The output of the algorithm results in a far more dynamic, more balanced distribution than the algorithm used in versions of Data ONTAP prior to 7.3.2.  The result is still the same, however, in that each TCP stream will associate with only one interface, allowing for only one port's worth of bandwidth per TCP stream. 
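The shape of the post-7.3.2 approach can be sketched as below. This is an assumed illustration, not NetApp's code: only the last 16 bits of each address feed the hash, and the hash output is reduced to a member index. zlib.crc32 stands in for the SuperFastHash that Data ONTAP actually uses.

```python
import struct
import zlib

# Assumed shape of the post-7.3.2 "-b ip" selection, for illustration
# only. zlib.crc32 is a stand-in for SuperFastHash.

def last16(ip: str) -> int:
    """Return the last 16 bits (last two octets) of a dotted-quad IP."""
    octets = [int(x) for x in ip.split(".")]
    return (octets[2] << 8) | octets[3]

def pick_member_732(src_ip: str, dst_ip: str, num_links: int) -> int:
    # Hash only the last 16 bits of source and destination addresses,
    # then reduce the hash output to a member interface index.
    key = struct.pack(">HH", last16(src_ip), last16(dst_ip))
    return zlib.crc32(key) % num_links
```

As the text notes, the mapping is still deterministic per source/destination pair, so each TCP stream remains pinned to a single member interface.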

Data ONTAP releases prior to 7.3.2:

Documentation prior to 7.3.2 states the formula as:

((source_address XOR destination_address) % number_of_links)

Example of the above formula:

IP address on the VIF:
Number of interfaces in the VIF: 4 (interfaces numbered 0,1,2,3)
Balancing type: IP

Test Machine:
IP address:

When a network request is made from the test machine, the filer XORs the last octet of the source address with the last octet of the destination address, then takes that value modulo the number of links to determine which interface the return traffic will go out.

last octet of source is : 00000110
last octet of destination: 00001111
XOR equiv: 00001001, which equals 9
9 divided by 4 equals 2 with a remainder (modulus) of 1. This means the return traffic will go back out the number 1 interface.
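The worked example above can be checked directly in code (the last octets 6 and 15 come from the example; everything else follows the stated formula):

```python
# Check of the worked example: last octets 6 (source) and 15
# (destination), four member interfaces in the VIF.
src, dst, links = 0b00000110, 0b00001111, 4
xor = src ^ dst
print(bin(xor))     # 0b1001, i.e. 9
print(xor % links)  # 9 % 4 = 1 -> return traffic uses interface 1
```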

Here is a test. The setup is as follows:

FilerA: IP address on the VIF:
Number of interfaces in the VIF: 2 (interfaces numbered 0 and 1)
Balancing type: IP

Test Machine:
IP address:

From the test machine I:

  • Mounted filesystem via NFS
  • Created a 10M file via mkfile

The vif stat looked like the following during the operation:

f840-rtp2> vif stat mvif1 1
Virtual interface (trunk) mvif1
            e3b                   e3a
   Pkts In  Pkts Out     Pkts In  Pkts Out
     42822     19395       42040     20131
         1         0           1         0
         1         0           1         0
         0         0           0         0
         2         0           2         0
         2         0           2         0
         4         0           4         0
         0         0           0         0
         3         0           3         0
         2         0           2         0
         3         0           3         0
         0         0        7372      3849
         0         0           1         0
         1         0           1         0
         1         0           1         0
         2         0           2         0

The math is as follows:
last octet of source is : 10110101
last octet of destination: 11001101
XOR equiv: 01111000, which equals 120
120 divided by 2 equals 60 with a remainder (modulus) of 0. This means the return traffic will go back out the number 0 interface, which is e3a.
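This second example can also be verified in code, using the last octets shown in the math above and the two-member VIF from the test setup:

```python
# Second worked example: last octets 181 (10110101, source) and
# 205 (11001101, destination), two member interfaces.
src, dst, links = 0b10110101, 0b11001101, 2
xor = src ^ dst
print(xor)          # 120 (binary 01111000)
print(xor % links)  # 120 % 2 = 0 -> interface 0, which is e3a
```

This matches the vif stat output above, where only e3a shows the bulk of the traffic (7372 packets in, 3849 out) during the mkfile.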

Setup for VIF test

  • Create multi-mode vif on filer (annotate the interfaces): 
    vif create multi VIF_NAME interface_1 interface_2
  • On switch create etherchannel group of switch ports that the filer is connected to: 
    set port channel Blade/Interface_list mode desirable
  • Verify port channel is setup: 
    show port channel
  • Assign IP address to the vif: 
    ifconfig VIF_NAME IP_ADDRESS netmask XXX.XXX.XXX.XXX up
  • Make sure filer is exporting a filesystem via NFS.
    1. Ping the filer's VIF from a UNIX test client.
    2. Mount a filesystem exported from the filer using the VIF's IP address.
    3. Open a telnet window to the filer and do a "vif stat VIF_NAME 1"
    4. Change directory to mount point.
    5. XOR the last octet of the test machine's address with the last octet of the filer's VIF address. Annotate that value.
    6. Divide the previous value by the number of interfaces; the remainder is the number of the interface the traffic will go out.
    7. Do "mkfile 10m testfile", monitor the telnet window and watch which interface has the IN/OUT traffic.
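Steps 5 and 6 above can be wrapped in a small helper that predicts which member interface to watch in the vif stat window. The formula is the pre-7.3.2 IP-balancing one described earlier; the addresses in the usage line are hypothetical.

```python
# Helper for steps 5 and 6: predict which member interface the filer
# will use for return traffic under pre-7.3.2 "-b ip" balancing.
def predict_interface(client_ip: str, vif_ip: str,
                      num_links: int) -> int:
    last_octet = lambda ip: int(ip.rsplit(".", 1)[1])
    # XOR the last octets, then take the remainder modulo link count.
    return (last_octet(client_ip) ^ last_octet(vif_ip)) % num_links

# Hypothetical addresses matching the first worked example above:
print(predict_interface("10.0.0.6", "10.0.0.15", 4))  # -> 1
```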

Additional Information



NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.