- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/e0a_e0b_link_flaps_on_A300_FAS8200_A200_FAS2600_A220_FAS2700_C190_may_cause_a_Takeover
  Cluster ports e0a/e0b links flap or go down at the same time, with or without a node takeover. This could be caused by hardware failure on one of the nodes. The KB article provides steps to identify the affected node and resolve the issue.
- https://kb.netapp.com/hybrid/StorageGRID/Platforms/Storage_appliance_link_down_on_network_port_alert_for_one_node_but_the_node_is_in_optimal_status
  Applies to StorageGRID 11.5, StorageGRID appliance. Issue: "Storage appliance link down on network port" alert for one node in Grid Manager; no errors under Support > Grid topology > Node > SSM > Resources; cables, SFPs, and physical connections in working status; a node reboot did not clear the alert.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/SFP_reporting_High_TX_RX_causing_link_down
  The snippet shows SFP digital-diagnostics readings; column labels below (Measured / High Alarm / Low Alarm / High Warning / Low Warning) are inferred from the standard ONTAP SFP output layout, with "++" flagging a reading above the high alarm threshold:

              Measured     High Alarm  Low Alarm   High Warn  Low Warn
    Current   8.33 mA      13.00 mA    3.00 mA     11.00 mA   5.00 mA
    Tx Power  -0.87 dBm    4.99 dBm    -10.00 dBm  2.99 dBm   -8.01 dBm
    Current   8.40 mA      13.00 mA    3.00 mA     11.00 mA   5.00 mA
    Tx Power  -0.83 dBm    4.99 dBm    -10.00 dBm  2.99 dBm   -8.01 dBm
    Rx Power  N/A          3.39 dBm    -14.08 dBm  2.39 dBm   -11.02 dBm
    Rx Power  -1.15 dBm    3.39 dBm    -14.08 dBm  2.39 dBm   -11.02 dBm
    Tx Power  8.16 dBm ++  4.99 dBm    -10.00 dBm  2.99 dBm   -8.01 dBm
    Rx Power  8.16 dBm ++  3.39 dBm    -14.08 dBm  2.39 dBm   -11.02 dBm
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Ethernet_Link_flapping_on_ONTAP_node
  This article offers a solution for the EMS event: [Node1: vifmgr: vifmgr.port.monitor.failed:error]: The "link_flapping" health check for port e0f (node Node1) has failed. The port is operating in a degraded state.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Link_down_on_cluster_port
  Covers the solution when EMS reports a cluster port going down and the LIF migrates to a surviving port.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/FC_link_down
  The snippet shows the diagnostic output referenced by the article:

    ::> network interface show -data-protocol fcp
                Logical    Status     Network                 Current  Current Is
    Vserver     Interface  Admin/Oper Address/Mask            Node     Port    Home
    ----------- ---------- ---------- ----------------------- -------- ------- ----
    vs1         lif        up/down    20:xx:xx:xx:xx:xx:xx:xx nodename 10c     true

    ::> fcp adapter show -instance -node node -adapter 10c
    [...]
    Received Optical Power: 0 (uWatts)
    Is Received Power In Range: false
    SFP Transmitted Optical Power: 0 (uWatts)
    Is Xmit Power In Range: false
    [...]
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Data_Network_port_is_down_even_after_swapping_with_working_SFP
  This article applies to AFF and FAS systems. It outlines the solution for a data/network port link staying down even after swapping in a working SFP, due to a faulty port on the node or switch.
- https://kb.netapp.com/on-prem/ontap/ontap-select/Select-KBs/OTS__After_physical_switch_upgrade_error_reported__RPC__Couldn_t_make_connection
  Applies to ONTAP Select. Issue: after replacing a physical uplink switch, the cluster would not give back, and an error was reported: RPC: Couldn't make connection [from mgwd on node "Node02" (VSID: -1) to mgwd at 169.254.30.122]. When checking the state of the nodes:

    Cluster01::> storage failover show
                                  Takeover
    Node           Partner        Possible State Description
    -------------- -------------- -------- -------------------------
    NODE01         NODE02         -        Unknown <------------------------
    NODE02         NODE01         false    In takeover
    2 entries were displayed.
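The articles above share a common first-triage pattern for link-down and link-flap symptoms. As a rough sketch (ONTAP CLI; the commands are taken from the snippets above plus the standard `network port show` / `event log show` commands, and exact fields vary by ONTAP release; `<node>` and `<adapter>` are placeholders):

    ::> network port show -node <node>                      # link and health state of each port
    ::> event log show -message-name vifmgr*                # link-flap / port-monitor EMS events
    ::> network interface show -data-protocol fcp           # FC LIF admin/oper status
    ::> fcp adapter show -instance -node <node> -adapter <adapter>
                                                            # SFP optical power in/out of range
    ::> storage failover show                               # takeover/giveback state of the HA pair

If optical power is out of range or a port flaps with a known-good SFP and cable, the KBs generally point to isolating whether the fault is the SFP, the node port, or the switch port by swapping one component at a time.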