sas.port.down:debug message: SAS port went down
Applies to
- ONTAP 9
- FAS/AFF Systems
Issue
- EMS log message example for a SAS port going down:
Wed Aug 25 19:34:43 +0300 [Filer1: pmcsas_admin_0: sas.port.down:debug]: SAS port "0b" went down.
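- To check how often this event has occurred, the EMS log can be queried from the clustershell. A minimal sketch, assuming a node named node1 (on some releases, debug-level messages may only be visible in the EMS log files rather than in this command's output):
Cluster::> event log show -node node1 -message-name sas.port.down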
- Command timeout messages are seen for devices on that SAS port before it goes down. Example:
Wed Aug 25 19:34:36 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.2. Quiescing the device.
Wed Aug 25 19:34:38 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.0. Quiescing the device.
Wed Aug 25 19:34:38 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.4. Quiescing the device.
Wed Aug 25 19:34:38 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.6. Quiescing the device.
Wed Aug 25 19:34:39 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.8. Quiescing the device.
Wed Aug 25 19:34:40 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.3. Quiescing the device.
Wed Aug 25 19:34:40 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on disk device 0b.00.10. Quiescing the device.
Wed Aug 25 19:34:40 +0300 [Filer1: pmcsas_timeout_0: sas.device.quiesce:debug]: Adapter 0a encountered a command timeout on enclosure services device 0b.00.99. Quiescing the device.
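- All of the quiesced devices in this example are behind port 0b (device names follow the adapterport.shelf.bay convention, so 0b.00.2 is bay 2 of shelf 00 on port 0b, and 0b.00.99 is that shelf's enclosure services device), matching the port that subsequently went down. As a hedged sketch, again assuming node1, the timeout events can be listed the same way:
Cluster::> event log show -node node1 -message-name sas.device.quiesce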
- Applications or users may experience latency.
- Example: When the errors above are seen, the
qos statistics volume latency show
command shows high disk latency, while statit shows some disks with higher utilization and latency.
- This, combined with the EMS messages, can confirm the signature when latency is seen.
- Note: The latency may not always be observed, but it is a possible signature.
- Example output:
cluster1::> qos statistics volume latency show -vserver svm1 -volume vol1
Workload            ID    Latency    Network    Cluster       Data       Disk    QoS Max    QoS Min      NVRAM
--------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
-total-              -   204.71ms   185.00us     7.00us   468.00us   204.04ms        0ms        0ms    11.00us
vol1-wid32140    32140   222.22ms    86.00us   125.00us   904.00us   221.11ms        0ms        0ms     1.00us

Cluster::> set advanced; node run -node node1 statit -b
(wait 30 seconds)
Cluster::*> set advanced; node run -node node1 statit -e
...
disk             ut%   xfers  ureads--chain-usecs  writes--chain-usecs  cpreads-chain-usecs  greads--chain-usecs  gwrites-chain-usecs
/data_aggr1/plex0/rg0:
0a.11.1           10   23.42    0.00  ....     .   11.21  61.56   133   12.21  57.15    50    0.00  ....     .    0.00  ....     .
0a.10.1           11   23.80    0.00  ....     .   11.66  59.25   147   12.14  57.37    55    0.00  ....     .    0.00  ....     .
1a.12.1           50  158.69  120.29  5.43  2299   10.26  34.39   553   28.15   9.93   839    0.00  ....     .    0.00  ....     .
0a.11.2           49  156.09  118.85  5.39  2253    9.96  34.97   553   27.28  10.05   794    0.00  ....     .    0.00  ....     .
0d.12.2           35  153.05  115.90  5.59   858    9.88  35.00   224   27.27   9.97   203    0.00  ....     .    0.00  ....     .
0d.11.3           36  162.10  124.59  5.45   862   10.05  35.10   228   27.45  10.10   209    0.00  ....     .    0.00  ....     .
0d.10.3           37  158.72  121.88  5.39   898    9.82  34.82   273   27.02  10.05   238    0.00  ....     .    0.00  ....     .
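- The state of the affected SAS port and shelf can also be checked from the clustershell. A minimal sketch, assuming node1 and port 0b from the example above (exact output columns vary by ONTAP release):
Cluster::> storage port show -node node1
Cluster::> storage shelf show -errors
While the port is down, it would typically be reported as offline or degraded in the storage port show output, and related shelf-side errors may appear in storage shelf show -errors.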