NetApp Knowledge Base

X4020 NVMe drives on NA51 firmware may time out and fail, leading to multi-disk panic

Category: aff-series
Specialty: hw

Applies to

  • ONTAP 9
  • X4020A (X4020S173A15TNQF) NVMe drive
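
To confirm whether a system has affected drives on this firmware, the drive model and firmware revision can be queried from the clustershell. A minimal sketch, assuming standard ONTAP 9 query syntax (the prompt is illustrative, and field names can vary slightly by release; the firmware revision appears as NA51 in the drive inventory string):

CLUSTER::> storage disk show -model X4020S173A15TNQF -fields model,firmware-revision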

Issue

  • Disks begin timing out and reporting SCSI Check Condition errors and path retries (the following errors repeat multiple times):

Sun May 02 12:20:30 -0600 [CLUSTER-01: scsi_cmdblk_strthr_admin: scsi.cmd.checkCondition:error]: Disk device 0n.14L0: Check Condition: CDB 0x28:79bf3200:0040: Sense Data SCSI:aborted command -  (0xb - 0x90 0x6 0xfa)(5240). 
Sun May 02 12:20:30 -0600 [CLUSTER-01: scsi_cmdblk_strthr_admin: disk.timeout.flush.start:debug]: Aggressive timeout flush started on disk 0n.14.
Sun May 02 12:20:30 -0600 [CLUSTER-01: disk_server_0: disk.IO.status:debug]: params: {'deviceName': '0n.14L0', 'returnCode': '9', 'pathRetryCount': '0', 'adapterStatus': '0x5', 'cdb': '0x28:7d726b10:0001', 'basicTimeout': '5', 'iASCQ': '0x0', 'iSenseKey': '0x0', 'sSenseCode': '', 'ETime': '5233', 'iASC': '0x0', 'victimRetryCount': '0', 'sSenseKey': 'SCSI:no sense', 'targetStatus': '0x0', 'disk_information': 'Disk 0n.14 Shelf 0 Bay 14 [NETAPP   X4020S173A15TNQF NA51] S/N [S123AB4CD45678] UID [36313230:4EC00745:00253845:00000003:00000000:00000000:00000000:00000000:00000000:00000000]', 'retryCount': '0', 'pathsTried': '0', 'timeoutRetryCount': '0'}
Sun May 02 12:20:30 -0600 [CLUSTER-01: scsi_cmdblk_strthr_admin: scsi.cmd.notReadyConditionEMSOnly:debug]: Disk device 0n.14L0: Device returns not yet ready: CDB 0x28:79bf3200:0040: Sense Data SCSI:not ready - Drive spinning up (0x2 - 0x4 0x1 0x82)(5240). 
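
These events can also be pulled directly from EMS rather than read out of the messages file. A sketch using the message names visible in the excerpt above (the node name is taken from the example logs):

CLUSTER::> event log show -node CLUSTER-01 -message-name scsi.cmd.checkCondition
CLUSTER::> event log show -node CLUSTER-01 -message-name disk.timeout.flush.start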

  • These timeouts then result in disk failures:

Sun May 02 12:27:12 -0600 [CLUSTER-01: scsi_cmdblk_strthr_admin: scsi.cmd.pastTimeToLive:error]: Disk device 0n.14L0: request failed after try #2: cdb 0x28:7a19bdc0:0040. 
Sun May 02 12:27:12 -0600 [CLUSTER-01: disk_server_0: scsi.debug:debug]: shm_setup_for_failure disk 0n.14 (S/N S123AB4CD45678) error 80000000h
Sun May 02 12:27:12 -0600 [CLUSTER-01: disk_server_0: scsi.debug:debug]: shm_setup_for_failure disk 0n.14 (S/N S123AB4CD45678) error 40000000h
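
Any disks that ONTAP has failed as a result should appear in the broken-disk list, which can be checked with:

CLUSTER::> storage disk show -broken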

  • This may result in orphaned disks, multi-disk panics, failed aggregates, and messages indicating that the node rebooted after power-on:

Sun May 02 12:30:14 -0600 [CLUSTER-01: config_thread: raid.assim.cls.notInCls:debug]: Orphaning disk 0n.14P3 in plex aggr1/0, because it is not in the CLS.
Sun May 02 12:30:14 -0600 [CLUSTER-01: config_thread: raid.assim.disk.spare:notice]: Sparing Disk 0n.14P3 Shelf 0 Bay 14 [NETAPP   X4020S173A15TNQF NA51] S/N [S123AB4CD45678NP003] UID [66313230:4EC00745:00253845:00000003:500A0981:00000003:00000000:00000000:00000000:00000000], because volume (aggr1) is online and complete
Sun May 02 12:30:14 -0600 [CLUSTER-01: config_thread: callhome.disk.orphan:notice]: Call home for ORPHAN DISK PROCESSING.

Sun May 02 12:30:25 -0600 [CLUSTER-01: config_thread: sk.panic:alert]: Panic String: aggr aggr1: raid volfsm, fatal multi-disk error.. Raid type - raid_dp Group name plex0/rg0 state DOUBLERECONS. 1 disk failed in the group. Disk 0n.14P2 Shelf 0 Bay 14 [NETAPP X4020S173A15TNQF NA51] S/N [S123AB4CD45678NP003] UID [66313230:4EC00745:00253845:00000003:500A0981:00000003:00000000:00000000:00000000:00000000] error: fatal disk error. in SK process config_thread on release 9.8P2 (C)
 
Sun May 02 12:30:53 -0600 [CLUSTER-01: config_thread: raid.vol.failed:notice]: Aggregate aggr1: Failed due to multi-disk error.
 
Sun May 02 12:33:03 -0600 [CLUSTER-01: send_boot_msg_thread: mgr.boot.reason_ok:notice]: System rebooted after power-on.
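
After the reboot, the state of the affected aggregate can be verified from the clustershell; for example, using the aggregate name from the excerpt above:

CLUSTER::> storage aggregate show -aggregate aggr1 -fields state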

 
