NetApp Knowledge Base

How long should slice and block drives remain logically out of an Element OS cluster?

Applies to

  • NetApp SolidFire Storage Nodes
  • NetApp H Series Storage Nodes
  • NetApp Element OS with ONLY the following example fault condition in the Error Logs:



  • While the acceptable duration varies by environment, leaving slice and block drives logically out of the cluster for an unanticipated long period of time is NOT a best practice.
  • A node in this state does not share the cluster's workload, which reduces redundancy if another node in the cluster fails or goes Offline.
  • If a single slice drive remains in Available status for a period of time, the cluster carries extra workload because all of the primary iSCSI sessions for the data on that node are redirected to a different node in the cluster.
  • If all block drives remain in Available status for a period of time, the main impact is loss of free space.
  • If the node only has this fault condition, we strongly encourage customers to add the drives back to the cluster from the cluster UI, which returns the node to a Healthy status.
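As an alternative to the cluster UI, the same remediation can be scripted against the Element JSON-RPC API, which exposes ListDrives and AddDrives methods. The sketch below (a minimal illustration, not a supported tool) filters a ListDrives response for drives in Available status and builds the corresponding AddDrives request body; the sample drive IDs and trimmed response fields are illustrative assumptions.

```python
import json


def available_drive_ids(list_drives_result):
    # Drives reported by ListDrives with status "available" are logically
    # out of the cluster and are candidates to be added back.
    return [d["driveID"] for d in list_drives_result["drives"]
            if d["status"] == "available"]


def build_add_drives_request(drive_ids):
    # JSON-RPC request body for the Element AddDrives method.
    return {
        "method": "AddDrives",
        "params": {"drives": [{"driveID": i} for i in drive_ids]},
        "id": 1,
    }


# Example with a trimmed ListDrives response (fields abbreviated;
# drive IDs are made up for illustration):
sample = {"drives": [
    {"driveID": 41, "status": "active"},
    {"driveID": 42, "status": "available"},
    {"driveID": 43, "status": "available"},
]}
ids = available_drive_ids(sample)
print(ids)
print(json.dumps(build_add_drives_request(ids)))
```

In practice the request body would be POSTed to the cluster's JSON-RPC endpoint with cluster admin credentials; verify drive status with ListDrives afterward to confirm the node returns to a Healthy state.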


NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.