NetApp Knowledge Base

Performance implications of using mixed 40G and 10G cluster interconnects

Category: fas-systems
Specialty: hw
Applies to

  • ONTAP 9
  • CN1610 / Cisco Nexus cluster interconnect switches

Issue

  • Increased latency over the cluster interconnect is observed when data is accessed indirectly, accompanied by port errors in netstat and/or ifstat output. This occurs in clusters that mix platform models, where one HA pair has 40G cluster ports and another has 10G cluster ports, and data owned by the 40G nodes is accessed through the 10G nodes.
  • The speed mismatch can put pressure on the node(s) sending and/or receiving the traffic whenever a data LIF accesses data indirectly (over the cluster interconnect). This shows up as retransmits in netstat output or as buffer issues/packet drops in ifstat -a output; see the example commands after this list.
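For reference, a minimal way to review the counters mentioned above is from the clustershell. The cluster name (cluster1), node name (node-01), and port names below are placeholders for illustration only, and exact command output and flag support can vary by platform and ONTAP release:

    cluster1::> network port show -ipspace Cluster
        (confirm which nodes have 40G versus 10G cluster ports and their current link speeds)

    cluster1::> system node run -node node-01 -command netstat -s
        (check the TCP statistics for an increasing retransmit count)

    cluster1::> system node run -node node-01 -command ifstat -a
        (check the cluster ports for receive/transmit errors and buffer-related drops)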
