NetApp Knowledge Base

    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/HA_links_down_on_the_nodes_with_internal_interconnect
      The HA interconnect is via the midplane and the event logs report the links as down:
      Sat Oct 23 21:35:49 +0200 [node: gop_eq_thread: ic.linkStatusChange:info]: HA interconnect: Port ic0a link is down.
      Sat Oct 23 21:36:13 +0200 [node: cfdisk_config: cf.diskinventory.sendFailed:debug]: params: {'errorCode': '0', 'reason': 'HA Interconnect down'}
      The HA links may also be seen flapping in some cases. Using the "cluster ha modify -configured" command, an attempt to configure HA fails with the below error:
    • https://kb.netapp.com/Cloud/Cloud_Volumes_ONTAP/CVO_HA_Group_Notification_HA_INTERCONNECT_DOWN_ALERT
      Applies to: Cloud Volumes ONTAP. Issue: The cluster is reporting the HA interconnect as down: HA Group Notification (HA INTERCONNECT DOWN) ALERT. Giveback is not possible, and one node is not accessible.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Added_FAS500F_to_existing_cluster_having_HA_Interconnect_issues
      Attempting to add 2 new FAS500f nodes to an existing cluster. The nodes are added to the cluster successfully, but storage failover shows:
      cluster1-01  cluster1-02  true   Connected to cluster1-02
      cluster1-02  cluster1-01  true   Connected to cluster1-01
      cluster1-03  cluster1-04  true   Connected to cluster1-04
      cluster1-04  cluster1-03  true   Connected to cluster1-03
      cluster1-05  cluster1-06  false  Waiting for cluster1-06,
      cluster1-06  cluster1-05  false  Waiting for cluster1-05, Takeover is not
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/What_will_happen_if_both_MetroCluster_IP_switches_that_are_at_one_site_are_not_working
      Applies to: MetroCluster IP. Answer: SyncMirror to the remote site will stop, DR replication to the remote site will stop, and the cluster goes out of quorum (OOQ). Data service continues based on the protocol, identically to a non-MetroCluster ONTAP 9 cluster. Be aware that outages may occur.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/HA_interconnect_down_on_newly_added_AFF_A250_to_existing_cluster
      Added an AFF A250, AFF C250, ASA A250, ASA C250, or FAS500f to an existing cluster. The node is successfully added to the cluster, but the logs report:
      [cluster1: statd: ic.HAInterconnectDown:error]: HA interconnect: Interconnect down for 18328 minutes: all links are down
      [cluster1: statd: callhome.hainterconnect.down:alert]: Call home for HA INTERCONNECT DOWN due to all links are down.
    • https://kb.netapp.com/on-prem/ontap/ontap-select/Select-KBs/ONTAP_Select_node_boot_error_Waiting_for_requisite_number_of_Mailboxes
      When you open the vCenter console for each Select Node VM, one or both show a boot loader message "Waiting for requisite number of Mailboxes". The Deploy UI may display the following error: "page cannot be found" or "502 Bad Gateway". Logging in to the Deploy UI (either through console or SSH), the following error may be displayed: NetApp ONTAP Select Deploy Utility. Deploy API service is currently not available. (Press Ctrl-C to continue to the shell) <center><h1>502 Bad Gateway</h1></center>
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/HA_interconnect_down_on_newly_added_AFF_A250
      Newly added an AFF A250 to an existing cluster. The AFF A250 was added to the cluster network successfully, but the logs report:
      nvmm.mirror.aborting: mirror of sysid 1, partner_type HA Partner and mirror state NVMM_MIRROR_OFFLINE is aborted because of reason NVMM_ABORT_SYNCING_MIRROR.
      nvmm.mirror.aborting: mirror of sysid 1, partner_type HA Partner and mirror state NVMM_MIRROR_LAYOUT_SYNCING is aborted because of reason NVPM_ERR_MSG_SEND_FAILED.
      cf.diskinventory.sendFailed: reason="HA Interconnect down", errorCode="0"
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/HA_Interconnect_down_on_AFF_A320
      [node_name: nvmm_mirror_sync: nvmm.mirror.aborting:debug]: mirror of sysid 1, partner_type HA Partner and mirror state NVMM_MIRROR_LAYOUT_SYNCING is aborted because of reason NVPM_ERR_MSG_SEND_FAILED.
      [node_name: nvmm_error: nvmm.mirror.aborting:debug]: mirror of sysid 1, partner_type HA Partner and mirror state NVMM_MIRROR_OFFLINE is aborted because of reason NVMM_ABORT_SYNCING_MIRROR.