Posted by: esa | September 21, 2011

Multimode VIF Guide on NetApp storage

This guide was written by Trey Layton, and may be useful for me or anyone…

A MultiMode VIF is NetApp's term for EtherChannel, or port channeling.  Quite simply, it is the bonding of physical links into a virtual link, where the virtual link utilizes an algorithm to distribute, or load-balance, traffic across the physical links.

The first subject to tackle is NetApp terminology versus networking industry terminology.  At NetApp we tend to generalize MultiMode VIF into an all-encompassing description of channeling.  Some of us refer to MultiMode VIFs as “trunked ports”.  This is an inaccurate description, yet I understand why the term is used.  When referring to a “trunked interface” the networking industry thinks of that as a VLAN trunked interface, utilizing a technology like 802.1q.  Therefore, when I refer to the technology enabled by MultiMode VIFs I will always call the physical links EtherChanneled or channeled interfaces.

The next thing we tend to do is never reference the other type of MultiMode VIF, that is the “Dynamic MultiMode VIF”.  Many of you reading that term for the first time are going to ask what I am talking about.  Take a look at our OnTap Network Management Guides for virtually any release and browse the section on MultiMode VIFs.  You will see two distinct types of MultiMode VIFs: the Static MultiMode VIF and the Dynamic MultiMode VIF.  These two types are key differentiators, and knowing what each is will help you in those conversations with the networking team in your organization.

Static MultiMode VIF – A Static MultiMode VIF is a static EtherChannel.  A static channel is quite simply the static definition of physical links into the channel.  There is no negotiation or auto-detection of a physical port's status or ability to be channeled.  The interfaces are simply forced to channel.

The Cisco command to enable a static etherchannel is channel-group (#) mode on

The NetApp command to enable a static MultiMode VIF is vif create multi

** Covered in detail in the templates below

Dynamic MultiMode VIF – A Dynamic MultiMode VIF is an LACP EtherChannel.  LACP is short for Link Aggregation Control Protocol and is the IEEE 802.3ad standard for port channeling.  LACP provides a means to exchange PDUs between devices which are channeled together; in the case of the present topic, that is a NetApp controller and a Cisco switch.  The only difference between the two types is the use of PDUs to alert the remote partner of interface status.  This is used when one partner (let's say a switch) decides it is going to remove one physical interface from the channel, for reasons other than the link being physically down.  If the switch removes a physical interface from the channel, with LACP a transmission from the switch to the partner (NetApp controller) is sent, providing notification that the link was removed.  This allows the controller to respond and also remove that link from the channel, thus avoiding a situation where the controller attempts to continue to use that link, causing certain transmissions to be lost.  Static EtherChannels do not have this ability, and if a situation like this occurs, the only means to remove the link from the channel is via a configuration change, cable removal, or administrative shutdown of the port.
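The behavior difference above can be sketched as a toy Python model. This is not the actual LACP state machine, just an illustration of the notification asymmetry: with LACP the partner learns a member left the bundle, while a static channel leaves the partner transmitting into a dead member.

```python
class Channel:
    """Toy model of one side of an EtherChannel bundle."""

    def __init__(self, members, lacp=False):
        self.members = set(members)   # physical ports in the bundle
        self.lacp = lacp
        self.partner = None           # the device on the other end

    def remove_member(self, port):
        self.members.discard(port)
        if self.lacp and self.partner:
            # LACP PDU notifies the partner that the member left the bundle
            self.partner.members.discard(port)

# Both sides start with the same four members, LACP enabled
switch = Channel({"e0a", "e0b", "e0c", "e0d"}, lacp=True)
filer = Channel({"e0a", "e0b", "e0c", "e0d"}, lacp=True)
switch.partner = filer

# Switch administratively removes a member; the filer side follows,
# so no traffic is sent down the removed link
switch.remove_member("e0a")
```

With `lacp=False`, the `remove_member` call would leave `e0a` in the filer's member set, which is exactly the black-hole scenario the static channel cannot avoid without a configuration change.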

I provide the above distinctions because I find that many often interchange the terms Static MultiMode and LACP.  This can produce problems in the configuration of the network to support the controllers, so try to stick with the terminology above.

The next thing I often see is a conversation around which technology, LACP VIFs or Static VIFs, provides better load-balancing.  The truth is they are simply the same; there is no performance benefit provided by one over the other.  This generally leads to the topic of load-balancing, as we often find that not everyone understands the mechanism provided by the current load-balancing algorithms for LACP and Static VIFs.  There are limitations to the technology, and understanding those limitations is key to getting the most out of the deployment when utilizing them.

The Cisco commands to enable a dynamic (LACP) etherchannel are channel-group (#) mode active and channel-protocol lacp

The NetApp command to enable a dynamic MultiMode VIF is vif create lacp

** Covered in detail in the templates below

Load-balancing in VIFs can utilize one of three algorithms: IP, MAC, and Round Robin.

Round Robin

I am personally not a fan of Round Robin load-balancing, as I used this algorithm in the early 90s, when a majority of networking manufacturers were first introducing EtherChannel-based features.  This technology runs the risk of packets arriving out of order and has been nearly eliminated from most network manufacturers' equipment features for that reason.  However, there are still deployments in production which utilize this feature, and they work without issue.  Round Robin essentially oscillates Ethernet frames over the active links in the channel.  This provides the most even distribution of transmission, but it can produce a situation where frame 1 is transmitted on link 1 and frame 2 is transmitted on link 2, and frame 2 arrives at the destination prior to frame 1 because of congestion experienced by frame 1 while in transit.  This would produce a condition where errors occur and the protocol and application would need to facilitate a recovery, which typically results in the frames being retransmitted.
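Round Robin's flow-blind rotation can be sketched in a few lines of Python. The link numbering here is illustrative; the point is that frame placement ignores source and destination entirely, which is why reordering is possible.

```python
from itertools import cycle

# Four active links in the channel; Round Robin simply rotates through them
links = cycle([1, 2, 3, 4])

# Each successive frame goes out on the next link, regardless of which
# conversation it belongs to
frame_to_link = [next(links) for _ in range(6)]
# frames 1..6 are placed on links 1, 2, 3, 4, 1, 2
```

Because consecutive frames of the same conversation land on different physical links, any per-link congestion can deliver them out of order, as described above.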

Source and Destination Pairs

The load-balancing algorithms used in most NetApp MultiMode VIF deployments are detailed in the sections that follow, but one thing they have in common is that they calculate the interface to be used by executing an XOR algorithm on source and destination pairs.  As source and destination pairs are compared, the result is ultimately divided by the number of physical links in the VIF, and the remainder is matched to one of the physical interfaces.  It is important to understand this, as many people assume that bonding 4 physical links together enables a speed equal to the sum of the links.  This is not true; the maximum speed that can be reached on an EtherChannel link is equal to the speed of one physical link in the channel, not the sum.  Consider an example of a connection which contains 4 1Gbps physical links bonded into a MultiMode VIF.  It is often assumed that this would equal 4Gbps of bandwidth to the controller.  It actually equals 4 – 1Gbps links to the controller.  A single transmission (source and destination pair) can burst up to the speed of one of the physical links (1Gbps).  No single communication can exceed the 1Gbps speed.
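The XOR-and-divide selection above can be sketched in Python. The exact hash inputs vary by vendor and algorithm; this minimal version just shows why a given source/destination pair always maps to exactly one link, which is why one conversation can never exceed a single link's speed.

```python
def pick_link(src: int, dst: int, num_links: int) -> int:
    """XOR the source and destination values, then take the remainder
    modulo the number of physical links in the channel.  The result
    selects one (and only one) physical interface."""
    return (src ^ dst) % num_links

# The same pair always hashes to the same link, so a single
# transmission is pinned to one physical interface
link = pick_link(10, 100, 4)   # 10 XOR 100 = 110; 110 % 4 = 2
```

Different pairs spread across the links, but any one pair is deterministic; that determinism is the core limitation discussed throughout this section.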

The following sections will describe how the algorithms work.

NOTE: The algorithms defined herein are industry limitations and are the same no matter the manufacturer.  Cisco has implemented a few additional algorithms, but none overcome the core limitation of not being able to exceed the speed of a given physical link in the channel.

MAC Load-Balancing

This is the least-common algorithm utilized because of conditions which produce the likelihood that traffic distribution will be weighted heavily to a single link.  The MAC-based algorithm makes an XOR calculation on the source and destination pair of the MAC addresses in a communication.  The source would be the MAC address of the NIC on the host connecting to the NetApp controller.  The destination MAC address would be the MAC address of the VIF interface on the NetApp controller.  This algorithm works well if the hosts and NetApp controller reside on the same subnet or VLAN.  When hosts reside on a different subnet than the NetApp controller, we begin to expose the weakness in this algorithm.  To understand the weakness you must understand what happens to an Ethernet frame as it is routed through a network.

Let's say we want Host1 to connect to Controller1.

Host1's IP address is 10.10.1.10/24 (Host1's default router is 10.10.1.1)

Controller1's IP address is 10.10.3.100/24 (Controller1's default router is 10.10.3.1)

Above we have defined the host and controller on two separate subnets.  The only way they can communicate with each other is by going through a router.  In the case of the example above, default routers 10.10.1.1 and 10.10.3.1 are actually the same physical router; those addresses are simply two physical interfaces on the router.  The router's purpose is to connect networks and allow communication between subnets.

As Host1 transmits a frame destined for Controller1, it addresses the frame to its default router because it recognizes that 10.10.3.100 is an IP address not on its local network; therefore it forwards the frame to its default router so that it can be forwarded to that destination.

Host1 to Host1Router

-IP Source: Host1 (10.10.1.10)

-MAC Source: Host1

-IP Destination: VIFController1 (10.10.3.100)

-MAC Destination: Host1DefaultRouter

Host1Router Routing Packet to Controller1

-IP Source: Host1 (10.10.1.10)

-MAC Source: Controller1DefaultRouter

-IP Destination: VIFController1 (10.10.3.100)

-MAC Destination: VIFController1

NOTE:  The source and destination MAC addresses changed as the frame was forwarded through the network.  This is how routing works: as routers exist between source and destination, the MAC address can change multiple times.  How many times is not of concern; what matters is what happens when the frame is forwarded onto the local segment of the controller.  The source MAC will always be the router and the destination MAC will always be the controller VIF.  If the source and destination pair is always the same, then you will always be load-balanced to one link.  To fully understand how this creates a problem, let's say that we have a 4 – 1Gbps EtherChannel on Controller1.  Let's also say that we have 50 other hosts on the same subnet as Host1.  The source and destination pair for Host1 to Controller1 is exactly the same for every other host on Host1's network, as the source and destination MAC addresses will always be Controller1DefaultRouter and VIFController1.
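The single-link pile-up described in the note can be demonstrated with a toy MAC hash. The MAC addresses and the hash itself are made up for illustration; what matters is that every routed host presents the same source MAC (the router), so every one of the 50 hosts produces the identical pair.

```python
def mac_hash(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Toy XOR hash over the full MAC values (illustrative, not the
    real vendor algorithm)."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

ROUTER_MAC = "00:00:0c:aa:bb:01"   # Controller1DefaultRouter (made-up)
VIF_MAC = "02:a0:98:11:22:33"      # VIFController1 (made-up)

# 50 hosts on the remote subnet: every frame arrives on the controller's
# segment with the router as source MAC, so every host hashes identically
links_used = {mac_hash(ROUTER_MAC, VIF_MAC, 4) for _ in range(50)}
# only one of the four links ever carries this traffic
```

All 50 conversations collapse onto one physical link of the four, which is exactly why MAC load-balancing breaks down across routed boundaries.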

IP Load-Balancing

IP load-balancing is the default parameter for all NetApp MultiMode VIFs and is the most common type of MultiMode VIF in production today.  The algorithm is no different than the MAC algorithm defined above.  The difference is that we are using source and destination IP addresses, and if you go back through the example above you will note that the source and destination IP addresses never change, unlike the MAC addresses.  The fact that the IP addresses never change means you are more likely to have more unique pairs, which will result in a more equal distribution of traffic across the physical links.

It is important to understand one final thing about source and destination IP pairs: the last octet of the IP address is the only factor used in calculating the source and destination pair.  This means that for IP source 10.10.1.10, only the 10 (last octet) is used, and for IP destination 10.10.3.100, only the 100 (last octet) is used.  Be aware that the last digits in the IP address are used for the calculation, so that if you deploy hosts on multiple subnets, hosts with the same last octets will be transmitted on the same physical links.
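The last-octet behavior described above can be sketched as follows. This follows the text's description rather than any vendor documentation: only the final octet of each address feeds the XOR.

```python
def ip_hash(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Sketch of the last-octet calculation described above: only the
    final octet of each address participates in the XOR."""
    src_last = int(src_ip.split(".")[-1])
    dst_last = int(dst_ip.split(".")[-1])
    return (src_last ^ dst_last) % num_links

# Hosts on different subnets but with the same last octet land on the
# same physical link, because the first three octets are ignored
a = ip_hash("10.10.1.10", "10.10.3.100", 4)
b = ip_hash("10.20.9.10", "10.10.3.100", 4)
# a == b: both hosts end in .10, so both hash to the same link
```

This is why a deployment that addresses hosts identically across subnets (e.g. every database server ending in .10) can unintentionally concentrate traffic on one link.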

IP Aliasing

Understanding load-balancing algorithms allows you as an administrator to exploit them to your benefit.  All NetApp VIFs and physical interfaces have the ability to have an alias placed on the interface.  This is simply an additional address on the VIF itself.  I always advise customers to place a number of addresses (VIF + number of aliases) equal to the number of physical links in the EtherChannel between the controller and the switch to which the controller is attached.  Therefore, if you have a 4 1Gbps MultiMode VIF between a controller and switch, then place one address on the VIF and three aliases on that same VIF.

Simply placing the additional addresses will not exploit the advantage of additional addresses.  You must ensure that the hosts which mount data from the NetApp controllers utilize all of the addresses.  This can be achieved in a few different ways, depending on the protocol being utilized for storage access; below are a few NFS examples.

Oracle NFS – Oracle hosts should mount NFS volumes by evenly distributing NFS mounts across the available controller IP addresses.  If there are 4 different NFS mounts, then mount the four via the four different IP addresses on the controller.  Each mount will have a different source and destination pair, and the communication from the host to the controller will efficiently utilize the links.

VMware NFS – ESX hosts should mount each NFS datastore via a different IP address on the NetApp controller.  It is perfectly fine to utilize a single VMkernel interface (the source address) as long as you are mounting each datastore with a different IP address on the controller.  If you have more datastores than IP addresses, then simply distribute the datastore mounts evenly across the IP addresses on the controller.
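The even-distribution advice for both examples amounts to a simple round-robin assignment of mounts to controller addresses. The datastore names below are hypothetical; the addresses match the templates later in this post.

```python
# The VIF address plus its three aliases, per the templates below
controller_ips = ["10.10.3.100", "10.10.3.101", "10.10.3.102", "10.10.3.103"]

# Hypothetical datastore (or NFS mount) names
datastores = ["ds1", "ds2", "ds3", "ds4", "ds5", "ds6"]

# Round-robin each mount onto the next controller address, so each
# mount gets a distinct source/destination IP pair
mounts = {ds: controller_ips[i % len(controller_ips)]
          for i, ds in enumerate(datastores)}
```

With four addresses in play, the XOR hash sees four distinct destination octets, letting the channel spread the mounts across all four physical links instead of one.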

Final note about aliases:  When administrators configure physical interfaces on NetApp controllers, they typically partner those interfaces with the other controller's interfaces.  This ensures that failover of a controller will move the failed controller's interfaces to the surviving controller.  Anytime you place an alias on an interface, if you have partnered the physical interface, the aliases WILL travel to the clustered controller in failover.  You do not partner the aliases if the physical interface has already been partnered.

Finally the templates:

LACP – Dynamic MultiMode VIF

____________________________________

Filer RC File

#Manually Edited Filer RC file  3 March, 2009,  by Trey Layton

hostname filer a

vif create lacp template-vif1 -b ip e0a e0b e0c e0d

ifconfig template-vif1 10.10.3.100 netmask 255.255.255.0 mtusize 1500 partner (partner-vif-name)
ifconfig template-vif1 alias 10.10.3.101 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.102 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.103 netmask 255.255.255.0

route add default 10.10.3.1 1
routed on
options dns.domainname template.netapp.com
options dns.enable on
options nis.enable off
savecore

_____________________________________

Cisco Configuration

!!!!!!     The following interface is a virtual interface for the etherchannel.  This interface must be referenced
!!!!!!      on the physical interface to create the channel.

interface Port-channel 1
description Virtual Interface for Etherchannel to filer
switchport

switchport mode access

switchport nonegotiate
spanning-tree guard loop

spanning-tree portfast
!

!!!!!  The following are the physical interfaces in the channel.  The above is the virtual interface for the channel.
!!!!!  Each physical interface will reference the virtual interface.
interface GigabitEthernet 2/12
description filer interface e0a
switchport
switchport mode access

switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-protocol lacp
channel-group 1 mode active

!!!!!!
!!!!!!  The above channel-group command is the command which bonds the physical interface to the virtual interface
!!!!!!  previously created.  The command following the channel number is the mode – active is for LACP.
!!!!!!
!
interface GigabitEthernet 2/13
description filer interface e0b
switchport
switchport mode access

switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet 2/14
description filer interface e0c
switchport
switchport mode access

switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet 2/15
description filer interface e0d
switchport
switchport mode access

switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-protocol lacp
channel-group 1 mode active

Static EtherChannel – Static MultiMode VIF

____________________________________

Filer RC File

#Manually Edited Filer RC file  3 March, 2009,  by Trey Layton

hostname filer a

vif create multi template-vif1 -b ip e0a e0b e0c e0d

ifconfig template-vif1 10.10.3.100 netmask 255.255.255.0 mtusize 1500 partner (partner-vif-name)
ifconfig template-vif1 alias 10.10.3.101 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.102 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.103 netmask 255.255.255.0

route add default 10.10.3.1 1
routed on
options dns.domainname template.netapp.com
options dns.enable on
options nis.enable off
savecore

_____________________________________

Cisco Configuration

!!!!!!     The following interface is a virtual interface for the etherchannel.  This interface must be referenced
!!!!!!      on the physical interface to create the channel.

interface Port-channel 1
description Virtual Interface for Etherchannel to filer
switchport

switchport mode access
switchport nonegotiate
spanning-tree guard loop

spanning-tree portfast
!

interface GigabitEthernet 2/12
description filer interface e0a
switchport
switchport mode access
switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-group 1 mode on

!!!!!!
!!!!!!  The above channel-group command is the command which bonds the physical interface to the virtual interface
!!!!!!  previously created.  The command following the channel number is the mode – on is for a static channel.
!!!!!!
!
interface GigabitEthernet 2/13
description filer interface e0b
switchport
switchport mode access
switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-group 1 mode on

!
interface GigabitEthernet 2/14
description filer interface e0c
switchport
switchport mode access
switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-group 1 mode on

!
interface GigabitEthernet 2/15
description filer interface e0d
switchport
switchport mode access
switchport nonegotiate
flowcontrol receive on

no cdp enable
spanning-tree guard loop
spanning-tree portfast
channel-group 1 mode on

