Search
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/How_does_NetApp_ensure_the_integrity_of_data_on_returned_disks
  Applies to: Disk Drives. While handling Return Merchandise Authorization (RMA) disks, NetApp has established procedures for the management of hard disk drives returned through the RMA process, from the point of shipment through testing, overwriting, and final disposition. At no point in this process do any NetApp or NetApp-contracted personnel access customer data residing on a returned drive. For more information regarding this process, refer to the Hard Disk Drive RMA & Data Overwriting Process page.
- https://kb.netapp.com/Support/csm-sam/What_are_the_key_considerations_for_moving_storage_between_storage_systems
  Applies to: FAS/AFF, shelf, drive. Additional information: Add drives to a node or shelf; How to Repurpose an HA Pair with ONTAP 9; How to remove a shelf; Will changing the physical location of a storage system cause any issues with the warranty or maintenance for the asset.
- https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/How_to_replace_an_SSD_on_an_Element_storage_node
  This article provides instructions for replacing a solid-state drive (SSD) in an SF-Series or H-Series storage node. NetApp H-Series node: Replacing drives for H610S storage nodes. NetApp HCI storage node: see Replacing a solid-state drive for an H410S storage node for drive layout, and Replace drives for storage nodes for drive-placement best practices and instructions. NetApp SolidFire SF-Series node: How to replace an SSD on NetApp SolidFire SF-Series node.
- https://kb.netapp.com/on-prem/ontap/ontap-select/Select-KBs/ONTAP_Select_Deploy__Aggregate_not_reporting_correct_disk_type
  RAID Disk Device HA SHELF BAY CHAN Pool Type   RPM Used (MB/blks)      Phys (MB/blks)
  --------- ------ ------------ ---- ---- ------ --- ------------------- -------------------
  data      0b.3   0b   -    -  SA:A -    VMDISK N/A 5060152/10363193216 5140479/10527703032
  data      0b.14  0b   -    -  SA:A -    VMDISK N/A 5060152/10363193216 5140479/10527703032
  data      0b.7   0b   -    -  SA:A -    VMDISK N/A 5159944/10567567232 5241855/10735321080
  data      0b.11  0b   -    -  SA:A -    VMDISK N/A 5159944/10567567232 5241855/10735321080
  data      0b.4   0b   -    -  SA:A -    VMDISK N/A 5…
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Multiple_SSDs_failed_in_short_period_of_time
  Applies to: Solid-State Drive (SSD), AFF/FAS systems. Issue: a total of 6 SSDs failed in this A300 system over the past 3 months, and ONTAP was generating shm.threshold.spareBlocksConsumedMax messages for multiple SSDs. Example (a sketch for parsing this message format appears after this list):
  Thu Feb 27 19:08:47 [node-02: disk_health_mon: shm.threshold.spareBlocksConsumedMax:notice]: shm: There are 4 disks that have consumed at least 80 percent of their use-based internal spare capacity. The affected disks are: 0a.10.2 0a.10.8 0a.10.17 0a.10.12.
- https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/Failed_drive_on_Element_cluster
  Alert for driveFailed on an Element cluster, shown in NetApp SolidFire Active IQ and in the cluster Alerts tab.
- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Higher_than_expected_SSD_latency_due_to_6Gb_SAS_shelves_and_cables
  cluster1::> qos statistics volume latency show -vserver svm1 -volume vol1
  Workload        ID     Latency  Network  Cluster     Data     Disk  QoS Max  QoS Min    NVRAM
  --------------- ------ ------- -------- -------- -------- -------- -------- -------- --------
  -total-         -       4.71ms 185.00us   7.00us 468.00us   4.04ms      0ms      0ms  11.00us
  vol1-wid32140   32140  22.22ms  86.00us 125.00us 904.00us  21.11ms      0ms      0ms   1.00us
  -total-         -       4.58ms 253.00us   7.00us 991.00us   3.32ms      0ms      0ms  11.00us
  vol1-wid32140   32140  29.69ms  90.00us 162…
  (The disk-share arithmetic for this output is sketched after this list.)
- https://kb.netapp.com/Legacy/NetApp_HCI/OS/Mellanox_switches_SSD_disk_firmware_update
  Applies to: NVIDIA Mellanox switches, NVIDIA Mellanox Onyx. Issue: Mellanox SSD disks may go read-only, or the system OS might become corrupted.
- https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/AutoSupport_Message__driveFailed_on_Element_cluster
  AutoSupport message driveFailed on an Element cluster, shown in NetApp SolidFire Active IQ and in the cluster Alerts tab.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/What_happens_when_an_SSD_aggregate_reaches_high_percentage_of_the_total_capacity
  Applies to: AFF systems. Unlike HDD aggregates, where performance issues can appear in highly utilized environments, this should not be the case with SSD aggregates. The difference lies in the technology used to access the data: HDDs have mechanical parts and a design that contributes heavily to overall system performance, while SSDs have no mechanical parts, so data access is faster and should not be affected by how full the aggregate is.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/What_does_change_Max_P_E_count_to_reduce_failure_rate_mean_in_the_bug_1411698
  The P/E (Program/Erase) count is a crucial concept for understanding the lifespan and durability of solid-state drives (SSDs). It matters because each cell in an SSD can sustain only a limited number of P/E cycles before it can no longer reliably store data, and that number varies with the type of flash memory used. In bug 1411698, the goal of reducing SSD failure rates was achieved by improving the method of counting P/E cycles. (A rough endurance estimate built on the P/E count is sketched after this list.)
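For the shm.threshold.spareBlocksConsumedMax entry above, here is a minimal Python sketch of pulling the affected disk IDs out of an EMS line. The message layout and the regex are assumptions based only on the example quoted in that result, not on any documented ONTAP log schema:

```python
import re

# EMS line copied from the KB snippet above.
ems_line = (
    "Thu Feb 27 19:08:47 [node-02: disk_health_mon: "
    "shm.threshold.spareBlocksConsumedMax:notice]: shm: There are 4 disks "
    "that have consumed at least 80 percent of their use-based internal "
    "spare capacity. The affected disks are: 0a.10.2 0a.10.8 0a.10.17 0a.10.12."
)

def affected_disks(line: str) -> list[str]:
    # Assumes the "The affected disks are: ..." phrasing seen in the example.
    match = re.search(r"The affected disks are:\s*(.+?)\.?\s*$", line)
    return match.group(1).split() if match else []

print(affected_disks(ems_line))  # ['0a.10.2', '0a.10.8', '0a.10.17', '0a.10.12']
```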
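For the SSD-latency entry, the point of the qos statistics output is that nearly all of vol1's latency falls in the Disk column, which is what implicates the 6Gb SAS back end. A trivial sketch of that arithmetic, using only the numbers quoted in that result:

```python
# vol1-wid32140 row from the qos statistics output above (milliseconds).
total_latency_ms = 22.22  # Latency column
disk_latency_ms = 21.11   # Disk column

# Share of end-to-end latency spent in the disk/back-end service center.
disk_share = disk_latency_ms / total_latency_ms
print(f"Disk accounts for {disk_share:.0%} of vol1 latency")  # ~95%
```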
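For the P/E-count entry, a common back-of-the-envelope endurance estimate is capacity × P/E cycles ÷ write amplification. The sketch below uses that textbook formula with made-up numbers; none of the figures come from the KB article or from bug 1411698:

```python
def estimated_endurance_tbw(capacity_tb: float,
                            pe_cycles: int,
                            write_amplification: float = 2.0) -> float:
    """Rough total-bytes-written (TBW) estimate for an SSD.

    Each cell survives about pe_cycles program/erase cycles, so the flash
    can absorb roughly capacity * pe_cycles of writes; host writes are
    inflated by the controller's write amplification factor (WAF).
    All values here are illustrative, not NetApp specifications.
    """
    return capacity_tb * pe_cycles / write_amplification

# Hypothetical 3.8 TB drive with 3,000-cycle TLC flash and a WAF of 2:
print(f"{estimated_endurance_tbw(3.8, 3000):.0f} TBW")  # 5700 TBW
```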