
Multiple disks not responding after disk firmware upgrade

Applies to

  • ONTAP 9
  • FAS/AFF
  • MetroCluster FC

Issue

  • A disk firmware upgrade is performed on the disks:

[Node-01: bg_disk_fw_update_admin: bdfu.selected:info]: Disk 101swa:8.126L502 [NETAPP   X343_TA14E1T8A10 NA02] S/N [Y9XXXXXXXX9D] selected for background disk firmware update.
[Node-01: dynamic_dev_qual_admin: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X343_TA14E1T8A10.NA04.LOD on 1 disk(s) of plex [Pool0]...
[Node-01: dynamic_dev_qual_admin: dfu.fwDownloaded:debug]: Firmware downloaded to disk 101swa:8.126L502 [NETAPP   X343_TA14E1T8A10 NA04] S/N [Y9XXXXXXXX9D] successfully.
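
Note: the following firmware check is a suggested verification using standard ONTAP 9 clustershell syntax and is not part of the logged output above. It lists the firmware revision currently reported by each drive of the affected model, so drives still on NA02 can be distinguished from drives already updated to NA04:

::> storage disk show -model X343_TA14E1T8A10 -fields firmware-revision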

  • Multiple disks are not responding after the disk firmware upgrade:

[Node-01: config_thread: raid.config.filesystem.disk.not.responding:notice]: File system Disk /root_aggr101b/plex0/rg0/101swb:9.126L528 Shelf 60 Bay 1 [NETAPP   X343_TA14E1T8A10 NA02] S/N [Y9XXXXXXXX9D] UID [50000399:C803BE30:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] is not responding.
[Node-01: config_thread: raid.rg.degraded:notice]: : Raid group /root_aggr101b/plex0/rg0 is degraded
[Node-01: config_thread: callhome.fdsk.noio:error]: Call home for FILESYSTEM DISK NOT RESPONDING

[Node-01: config_thread: raid.config.spare.disk.not.responding:notice]: Spare Disk 101swb:8.126L511 Shelf 60 Bay 10 [NETAPP   X343_TA14E1T8A10 NA04] S/N [Y920XXXXXX9D] UID [50000399:C803C028:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] is not responding.
[Node-01: config_thread: callhome.sdsk.noio:error]: Call home for SPARE DISK NOT RESPONDING
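
To list every occurrence of these events on the affected node, an EMS query along the following lines can be used (clustershell syntax assumed; the message names are taken from the output above):

::> event log show -node Node-01 -message-name raid.config.filesystem.disk.not.responding
::> event log show -node Node-01 -message-name raid.config.spare.disk.not.responding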

  • The disks are reported as not responding only from one node:

::> system node run -node <node> sysconfig -r

Local broken disks

RAID Disk           Device               HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------           ------               ------------- ---- ---- ---- ----- --------------    --------------
not responding      101swa:8.126L502     2a    60  1   FC:A   0   SAS 10000 1713523/3509295616 1716957/3516328368
not responding      101swa:8.126L503     2a    60  2   FC:A   0   SAS 10000 1713523/3509295616 1716957/3516328368
not responding      101swa:8.126L510     2a    60  9   FC:A   0   SAS 10000 1713523/3509295616 1716957/3516328368
not responding      101swa:8.126L511     2a    60  10  FC:A   0   SAS 10000 1713523/3509295616 1716957/3516328368
not responding      101swa:8.126L1037    2a    91  10  FC:A   0  FSAS  7200 3807816/7798408704 3815447/7814037168
not responding      101swa:8.126L1038    2a    91  11  FC:A   0  FSAS  7200 3807816/7798408704 3815447/7814037168
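
A possible clustershell cross-check (syntax assumed, and cluster-level disk names may differ from the nodeshell names shown above) is to list broken disks cluster-wide and compare the result with the per-node sysconfig -r output:

::> storage disk show -broken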

  • A SyncMirror "plex failed" alert may be received if multiple drives are missing from the same plex:

[Node-01: config_thread: raid.vol.mirror.degraded:alert]: Aggregate root_aggr101b is mirrored and one plex has failed. It is no longer protected by mirroring.
[Node-01: config_thread: callhome.syncm.plex:alert]: Call home for SYNCMIRROR PLEX FAILED
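
The plex state of the affected aggregate can be confirmed with a command along these lines (syntax assumed; root_aggr101b is the aggregate named in the alert above). The failed plex would typically remain offline until the missing disks return and resynchronization completes:

::> storage aggregate plex show -aggregate root_aggr101b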

  • These disks show a normal status when checked from the other nodes.
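
To confirm the asymmetry, the same nodeshell command can be run against each node and the "Local broken disks" sections compared; <affected-node> and <partner-node> below are placeholders, not names taken from this article:

::> system node run -node <affected-node> sysconfig -r
::> system node run -node <partner-node> sysconfig -r

The disks appear as not responding only in the output of the node that lost access to them, while the other nodes continue to show them in their normal RAID groups or as spares.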

