NetApp Knowledge Base

About 51 results
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Failed_to_initiate_takeover_Reason_Node_can_t_see_some_of_the_partner_s_disk_because_multiple_disks_missing_from_sysconfig_a_on_local_node
      During an ONTAP upgrade, after the node reboots to the new image, all slot 1 disks are missing from SYSCONFIG-A on the local node. 29Oct2023 13:48:32 cf_disk_inventory_mismatch diskname="1b.33.19" uid="xxxxxxxx:yyyyyyyy:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000" host="Node01" Takeover of the partner node fails with Reason: Node can't see some of the partner's disk.
    • https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/How_long_does_LIF_failover_take_at_maximum_in_general
      Applies to ONTAP 9. Answer: The LIF failover target outage threshold is 45 seconds in all cases, primarily due to CIFS NDO lite requirements; the overall requirement is to not exceed a 60-second outage in any case. Link states are used to determine whether it is okay to host a LIF on a given port. Once the link state changes to unhealthy, VifMgr migrates the LIF to another up or healthy port immediately.
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/Used_Datastore_size_on_vSphere_client_doesnt_match_used_Block_Storage_size_on_SolidFire_Cluster_GUI
      Applies to NetApp Element software. Issue: The used datastore size on the vSphere Client doesn't match the used Block Storage size on the SolidFire Cluster GUI. In the example shown in the article, the used datastore size (Total Capacity - Free Capacity) on the vSphere Client is about 6.2 TB ((2147 + 2147 + 10239) - (1629 + 2145 + 4554)), while the used Block Storage size on the SolidFire Cluster GUI is 8.2 TB, which doesn't match the 6.2 TB.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/AFF_FAS_System_level_diagnostics_fail_to_identify_X1148A_include_fix_of_Bug_ID_1352289
      Applies to AFF A320, AFF A400, AFF C400, ASA A400, ASA C400, FAS8700, FAS8300. Issue: System-level diagnostics fails to correctly identify some X1148A 100GbE network interface cards (NICs) during the system scan. As a result, diagnostic tests will not be available for the X1148A NIC: System-Level HW Diagnostics 04.05.12 PCIE: FAILED Unsupported PCIE on slot 4 Slot PN SN Description 4 111-04025 MT23424005EB None
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/SPARES_LOW_report_on_a_system_that_has_enough_spare_disks_on_both_nodes_in_ADP_configration
      Applies to ONTAP 9 Advanced Disk Partitioning (ADP) active-passive configuration. Issue: Hot spare disks don't get auto-partitioned even in a "spares low" condition, and ONTAP keeps sending weekly SPARES_LOW AutoSupport messages.
    • https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/Nblade.vscanNoScannerConn_after_upgrading_to_ONTAP_version_between_9.4_and_9.7
      ONTAP was upgraded to a version that is higher than or equal to 9.4 and lower than 9.7. Example: The actual domain name is "xxx.yyy.zzz", but it appears as "xxx" in the error: Sun Nov 13 16:17:30 JST [cluster01: kernel: Nblade.vscanConnInvalidUser:notice]: For Vserver "svm1", the vscan connection request coming from client "<Client_IP>" is rejected because the logged-in user "xxx\user1" is not configured in any of the Vserver active scanner pools.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/SAS_Expander_has_set_the_Chassis_LED_ON_remains_after_PSU_replacement_even_if_reboot_BMC
      Chassis attention LED remains on after PSU replacement: FRU LED ID 1 = BMC Locate LED, FRU LED ID 2 = BMC System LED, FRU LED ID 3 = BMC Controller Attention LED, FRU LED ID 4 = BMC Controller Active LED. Node-01 1 Chassis 1 SAS Expander has set the Chassis LED ON; Node-02 1 Chassis 1 SAS Expander has set the Chassis LED ON. Even if the service event is deleted and the BMC is rebooted, or the LED is manually set off, the issue persists and the service event is raised again.
    • https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/IFGRP_Tx_Lagg_drops_count_increasing_during_its_member_port_link_flapping
      Wed May 24 13:45:17 +0900 [nodename: vifmgr: vifmgr.cluscheck.crcerrors:alert]: Port a1a on node nodename is reporting a high number of observed hardware errors, possibly CRC errors. Wed May 24 13:45:17 +0900 [nodename: vifmgr: vifmgr.cluscheck.crcerrors:alert]: Port a1a-65 on node nodename is reporting a high number of observed hardware errors, possibly CRC errors.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/scsi_cmdblk_strthr_admin__scsi_cmd_excessiveVictim
      Tue Oct 10 05:13:46 +0900 [Node-01: scsi_cmdblk_strthr_admin: scsi.cmd.excessiveVictim:debug]: device 3a.02.16: excessive victim abort: delta time 99000: retry CDB 0x5f:0601: victim retry count 100: retry count 0: timeout retry count 0: path retry count 0: adapter status 0x0: target status 0x2: sense data SCSI:not ready - Drive spinning up (0x2 - 0x4 0x1 ).
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Intermittent_PSU_Fan_error
      Fri Dec 30 03:38:58 +0900 [Nodename-01: dsa_worker0: ses.status.fanError:EMERGENCY]: DS224-12 (S/N SHFxxxxxxxxxxxx) shelf 18 on channel 0c cooling fan error for Cooling element 4: critical status. Fri Dec 30 03:39:34 +0900 [Nodename-01: dsa_worker1: ses.status.fanInfo:info]: DS224-12 (S/N SHFxxxxxxxxxxxx) shelf 18 on channel 0c cooling fan information for Cooling element 4: normal status.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/vifmgr_cluscheck_hwerrors_alert_report_in_ifgrp_a0a_due_to_CRC_error
      Tue Jan 23 02:24:24 +0900 [Node-01: vifmgr: vifmgr.cluscheck.hwerrors:alert]: Port a0a on node Node-01 is reporting a high number (at least 1 per 1000 packets) of observed hardware errors (CRC, length, alignment, dropped). Tue Jan 23 03:24:25 +0900 [Node-01: vifmgr: vifmgr.cluscheck.hwerrors:alert]: Port a0a on node Node-01 is reporting a high number (at least 1 per 1000 packets) of observed hardware errors (CRC, length, alignment, dropped).