Multiple drives failing due to exceeding the latency threshold on NS224
Applies to
- AFF-C250
- NS224
- MetroCluster
Issue
- Drives are failing due to exceeding the latency threshold.
- EMS reports multiple drive failures for exceeding the block latency and average I/O latency thresholds:
[Node01: disk_latency_monitor: shm.ssd.threshold.ioLatency:notice]: SSD 0m.i2.3L2 has exceeded the expected block latency in the current timeframe with an average
latency of 11952 us and an average utilization of 4 percent. The next highest SSD latency: 251 us. Disk 0m.i2.3L2 Shelf 1 Bay 1
[Node01: disk_latency_monitor: shm.threshold.highIOLatency:error]: Disk 0m.i2.3L2 exceeds the average IO latency threshold and will be recommended for failure.
[Node01: disk_latency_monitor: scsi.debug:debug]: shm_setup_for_failure disk 0m.i2.3L2 (S/N XXXXXXXXX) error 200000h
- The Ethernet port e1a also reports link down events in EMS:
[Node01: kernel: netif.linkDown:info]: Ethernet e1a: Link down, check cable.
[Node01: intr: netif.linkDown:info]: Ethernet e1a-30: Link down, check cable.
[Node01: kernel: netif.linkUp:info]: Ethernet e1a: Link up.
[Node01: intr: netif.linkUp:info]: Ethernet e1a-30: Link up.
- The ifstat -a output for e1a shows a high link up-to-down count:
---- Ethernet_Storage IPSpace ----
-- interface e1a (5 days, 0 hours, 20 minutes, 24 seconds) --
LINK INFO
Speed: 100G | Duplex: full | Flowcontrol: none
Media state: active | Up to downs: 676 | HW assist: 5655
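To check whether the drive failures correlate with instability on port e1a, the EMS history and port state can be reviewed from the cluster shell. The commands below are a general sketch of ONTAP diagnostics for this scenario; the node and port names (Node01, e1a) are taken from the logs above and should be replaced with the affected system's values.

```
::> event log show -node Node01 -message-name netif.linkDown
::> event log show -node Node01 -message-name shm.threshold.highIOLatency
::> storage port show -node Node01 -port e1a
::> system node run -node Node01 -command ifstat e1a
```

An "Up to downs" counter that keeps increasing while the drives report latency faults may point to the shelf path through e1a (cabling, transceiver, or the NS224 shelf module) rather than to the SSDs themselves.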