NetApp Knowledge Base

    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/Correctable_ECC_memory_error_on_NetApp_HCI_compute_nodes
      Applies to: NetApp HCI compute nodes. Issue: On booting: Memory Correctable ECC | Asserted Failing DIMM location (correctable memory component found) <DIMM location>. In the BMC logs: Memory Correctable ECC | Asserted
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/PSOD_with_Internal_Parity_Error_on_H410C
      Applies to: NetApp H410C, purple screen of death (PSOD), BIOS version NA3.4 and lower. Issue: Compute node has a PSOD with the following line: cpu53:32856290MCA: 130 UC Excp G5 B2 Sb200000000020005 A0 M0 P0/0 Internal Parity Error
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/What_is_supported_with_NetApp_HCI_and_VMware_ESX_version_8.0_and_beyond
      The following matrix lists the last verified versions of ESXi, BIOS, and NIC firmware tested by NetApp for NetApp HCI compute nodes. For NetApp HCI compute nodes running newer versions of BIOS, ESXi, or NIC firmware than those outlined in the matrix, NetApp will provide hardware failure support only. Customers can upgrade to ESXi 8.0 on H410C, H610C, and H615C compute nodes, but it is a manual process and the official VMware documentation should be followed to install ESXi 8.0.
    • https://kb.netapp.com/hybrid/StorageGRID/Platforms/SG6000_SG1000_or_SG100_does_not_boot._Boot_device_not_found_and_is_stuck_in_boot_loop
      Applies to: NetApp StorageGRID Appliance SG6000, SG1000, and SG100. When trying to install a new StorageGRID compute node appliance, the compute node is stuck in a boot loop and reports one of the following: boot device is not found; FlexBoot attempts to boot but then reboots; attempts to boot from PXE / network boot then reboots. The NetApp logo screen may flash a few times during a normal boot; this should not be considered part of the boot loop.
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/Can_NetApp_HCI_compute_node_report_to_the_Active_IQ_without_SolidFire_storage_cluster
      Applies to: NetApp HCI compute nodes. Answer: No, we currently do not support this configuration in SolidFire Active IQ. NetApp HCI compute nodes can only appear in the context of a storage cluster.
    • https://kb.netapp.com/Legacy/NetApp_HCI/Hardware/IPMI_access_to_all_compute_nodes_is_failed
      IPMI access to all compute nodes fails: the ipmitool command cannot contact the BMC. admin@SF-xxxx ~ $ sudo ipmitool -I lanplus -H BMC_IP -U ADMIN -P Password sel list Error: Unable to establish IPMI v2 / RMCP+ session Error: Unable to establish LAN session Error: Unable to establish IPMI v1.5 / RMCP session. A power drain of the compute node can temporarily resolve this issue, but the same issue returns after about 15-20 minutes.
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/Uplinks_flapping_on_H300E
      <NMLX_INF> nmlx5_core: vmnic4: nmlx5_en_UplinkQGetStats - (vmkdrivers/native/BSD/Network/mlnx/nmlx5/nmlx5_core/nmlx5_core_en_multiq.c:1777) not supported <NMLX_INF> nmlx5_core: vmnic5: nmlx5_en_UplinkQGetStats - (vmkdrivers/native/BSD/Network/mlnx/nmlx5/nmlx5_core/nmlx5_core_en_multiq.c:1777) not supported cpu11:2102461)WARNING: cswitch: VLAN_PortGetVLANData:397: [nsx@6876 comp="nsx-esx" subcomp="vswitch"]No vlan data for non dvs ports or ports without port group 0x2000001
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/MLAG_on_Mellanox_switch_reports_Failed_to_grow_the_pool_size_err_-12
      Some virtual machines (VMs) cannot be pinged or reached from devices within the same LAN, or cannot connect to external networks. When this occurs, there may be associated errors in the switch logs (as generated by sysdump) along these lines: HCI-MLX2 mlagd[4799]: TID 140503404242688: [mlagd.NOTICE]: [MLAG_MAC_SYNC_PEER_MANAGER.NOTICE] Failed to grow the pool size, err -12
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/NetApp_HCI_Compute_node_unresponsive_upon_switch_reboot
      Applies to: NetApp HCI compute nodes. Issue: ESXi host unresponsive during a network switch upgrade
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/ipmitool_is_not_installed_by_default_on_HCI_compute_nodes
      Applies to: NetApp HCI compute nodes. Issue: the ipmitool command is not installed by default on the ESXi image of NetApp HCI compute nodes
    • https://kb.netapp.com/Legacy/NetApp_HCI/Hardware/Compute_POST_halts_with_Failing_DIMM_message_and_shows_size_as_0MiB
      Applies to: NetApp HCI compute nodes. After powering on an HxxxC compute node, the boot sequence halts with the following message: (runtime)Failing DIMM: DIMM location and Mapped-Out P2-DIMME2. When checking the Failing DIMM status from within the node's BMC/IPMI settings under the System / Hardware Information / DIMM tab, the Size is reported as 0 MiB instead of the actual size (e.g. Max Capable Speed: 2666 MHz Operating Speed: 2666 MHz Size: 0 MiB Serial No.: Firmware Build Time: System BIOS Build Time:
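The ipmitool invocation quoted in the "IPMI access to all compute nodes is failed" result above can be sketched as follows. This is a hedged example, not taken from the KB articles themselves: BMC_IP, ADMIN, and the password are placeholders for your environment, and the cold BMC reset shown as a recovery step is a common generic workaround, not one the article documents.

```shell
# Query the System Event Log over IPMI-over-LAN (RMCP+), as in the KB snippet.
# BMC_IP, ADMIN, and Password are placeholders -- substitute your own values.
ipmitool -I lanplus -H BMC_IP -U ADMIN -P 'Password' sel list

# If the session cannot be established ("Unable to establish IPMI v2 / RMCP+
# session"), a cold reset of the BMC is a common generic first step
# (assumption -- not documented in the KB result above):
ipmitool -I lanplus -H BMC_IP -U ADMIN -P 'Password' mc reset cold
```

Note that per the "ipmitool is not installed by default" result above, these commands must be run from a host that has ipmitool available, not from the ESXi shell of the compute node itself.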