FILESYSTEM DISK BAD LABEL - AutoSupport Message
Applies to
- ONTAP 9
- Data ONTAP 8 operating in 7-Mode
- callhome.dsk.label
- callhome.fdsk.label
- raid.config.filesystem.disk.bad.label
Event Summary
- This message occurs when a disk drive fails due to unexpected RAID metadata.
- This is usually caused by adding non-zeroed drives that were previously used on a system running a different version of ONTAP.
- The drives have not truly failed. Because of the ONTAP version mismatch, the disks are reported as failed with a bad label.
[Node Name: config_thread: callhome.dsk.label:alert]: Call home for DISK BAD LABEL
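To confirm that the callhome event actually generated an AutoSupport message, the AutoSupport history can be reviewed from the cluster shell. This is a minimal sketch; <nodename> is a placeholder for the reporting node:
::> system node autosupport history show -node <nodename>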
Validate
Event Log
event log show -severity * -message-name *bad.label*
Fri Apr 29 14:11:31 +0200 [Node Name: config_thread: raid.assim.disk.nolabels:EMERGENCY]: Disk 0d.23.17 Shelf 23 Bay 17 [NETAPP X357_S163A3T8ATE NA51] S/N [S394NA0J219221] UID [5002538A:072721E0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Fri Apr 29 14:11:31 +0200 [Node Name: config_thread: raid.config.disk.bad.label:notice]: Disk 0d.23.17 Shelf 23 Bay 17 [NETAPP X357_S163A3T8ATE NA51] S/N [S394NA0J219221] UID [5002538A:072721E0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has bad or missing data on label.
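On a multi-node cluster, the same query can be scoped to the node that reported the event. A sketch, with <nodename> as a placeholder:
::> event log show -node <nodename> -message-name *bad.label*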
Command Line
storage disk show -broken
::> storage disk show -broken
Original Owner: node1
Checksum Compatibility: block
Usable Physical
Disk Outage Reason HA Shelf Bay Chan Pool Type RPM Size Size
--------------- ------------- ------------ ---- ------ ----- ------ -------- --------
1.1.0 bad label 0b 1 0 A Pool0 FCAL 10000 132.8GB 133.9GB
1.2.6 bad label 0b 2 6 A Pool1 FCAL 10000 132.8GB 134.2GB
Original Owner: node2
Checksum Compatibility: block
Usable Physical
Disk Outage Reason HA Shelf Bay Chan Pool Type RPM Size Size
--------------- ------------- ------------ ---- ------ ----- ------ -------- --------
1.1.0 bad label 0a 1 0 B Pool0 FCAL 10000 132.8GB 133.9GB
1.1.13 bad label 0a 1 13 B Pool0 FCAL 10000 132.8GB 133.9GB
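The affected disks can also be listed by container type in clustered ONTAP, and on systems operating in 7-Mode the failed disks are reported by the node-level aggregate command instead. Both lines below are a sketch; confirm availability on your release:
::> storage disk show -container-type broken
> aggr status -f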
Resolution
Note: The following action plan assumes that the disks are not critical data disks being migrated to the new system. Verify that the disks in question do not store production data.
- Unfail the drives to return them to the spare pool (a 7-Mode equivalent of this sequence is sketched after these steps)
::> set -privilege advanced
::*> storage disk unfail -s <disk_name>
- Repeat the unfail command for each remaining affected drive
- Verify that the recently unfailed drives now appear in the spare pool
::*> storage aggregate show-spare-disks
- Clear the bad label by zeroing the drives
::*> storage disk zerospares -owner <nodename>
- The bad labels are cleared once disk zeroing completes
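For Data ONTAP 8 operating in 7-Mode, the same unfail-and-zero sequence uses the node-level commands below. This is a sketch under the same assumption that the disks hold no production data; <disk_name> is a placeholder:
> priv set advanced
*> disk unfail -s <disk_name>
*> aggr status -s
*> disk zero spares
*> priv set admin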
Additional Information
Common causes for this AutoSupport Message:
| Cause | Related article |
| --- | --- |
| Drive sanitization was applied to the drives | Disks are not owned or report a Bad label after running the disk sanitize release command |
| MetroCluster IP: "Bad Label" immediately after running "metrocluster configuration-settings connection connect" | Disk bad label after "metrocluster configuration-settings connection connect" |
