NetApp Knowledge Base

About 7 results
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/Multiple_storage_nodes_shutdown_simultaneously
      2023-10-14T07:30:46.920105Z node-03 sfsvcmgr[18170]: [APP-5] [Process] 18180 signal_handler sfsvcmgr/sfsvcmgr.cpp:648:StopAllProcesses| Stopping serviceID: 20 PID: 25768
      2023-10-14T07:30:46.920360Z node-03 sfsvcmgr[18170]: [APP-5] [Process] 18180 signal_handler sfsvcmgr/sfsvcmgr.cpp:648:StopAllProcesses| Stopping serviceID: 47 PID: 25756
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/AutoSupport_Message__The_SolidFire_Application_cannot_communicate_with_node_ID
      This KB provides the solution and background for the AutoSupport alert: SolidFire Alert from <cluster_name> (Node Offline) Node Offline nodeID=<#> (a fault-listing sketch follows this results list).
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/Adding_a_VLAN_tag_to_the_storage_interface_resulted_in_nodeOffline
      Applies to NetApp Element software 9.0 and later. Issue: NodeOffline occurred due to network connectivity issues. 2022-03-29T16:27:54.649Z Error node ID # Cluster_name 0 Yes 2022-03-29T16:36:17.686Z nodeOffline The SolidFire Application cannot communicate with Storage node having node ID #.
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/SolidFire_Event_Reports_Updating_BMC_cold_reset_date
      Applies to NetApp SolidFire Storage Nodes, NetApp H Series Storage Nodes. Issue: ASUPP_PP generates a NetApp support case reporting the following: SFCOMM:SolidFire Alert from (ClusterName) (Cluster ID #) (Node Offline) Node Offline nodeID=x. When reviewing the Error Log, the node(s) error is resolved automatically in over (10+) minutes and the state of the node is again Healthy. The event logs will report the following:
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/How_to_verify_a_SolidFire_node_is_not_hosting_any_volumes
      Applies to SolidFire Storage Node, NetApp HCI Storage Node. Description: It is possible for a node to report as "nodeOffline" and even have all of its disks "failed", but still be serving volumes to clients/hosts. This KB will show how to ensure no volumes are being hosted on a node before maintenance is performed (see the drive-status sketch after this results list).
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/NodeOffline_alerts_after_maintenance_event_or_environmental_power_outage
      There are multiple errors in NetApp SolidFire Active IQ after a maintenance event or environmental power outage, such as node offline, SolidFire Application cannot communicate with a metadata service or with node ID, etc.
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/Storage_Node_Reports_Error_Drive_Failed_xDrive_s_With_State__NodeOffline
      Error Code: nodeOffline | Details: The SolidFire Application cannot communicate with the Storage node having node ID xx. The drive(s) are marked as Failed by the Cluster Master node because it can no longer communicate with the drives due to the stopped block services. If the block services are not recovered within 5 1/2 minutes, the drives will automatically sync out and NetApp Support should be contacted to help determine if the drives can be re-added to the node's configuration.
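
For the nodeOffline alerts described in the results above, one quick way to confirm whether the fault is still current is to query the cluster directly. The following is a minimal sketch, not taken from any of the KBs listed: it assumes the standard Element JSON-RPC endpoint and the ListClusterFaults method, and the MVIP address, API version, and credentials are placeholders to adjust for your environment.

# Minimal sketch (illustrative, not from the KBs above): list current nodeOffline
# faults via the Element JSON-RPC API. MVIP, API_VERSION, and AUTH are placeholders.
import requests

MVIP = "203.0.113.10"          # placeholder: cluster management virtual IP
API_VERSION = "12.3"           # placeholder: adjust to your Element API version
AUTH = ("admin", "password")   # placeholder: cluster admin credentials

def list_node_offline_faults():
    """Return current cluster faults whose fault code is 'nodeOffline'."""
    resp = requests.post(
        f"https://{MVIP}/json-rpc/{API_VERSION}",
        json={"method": "ListClusterFaults",
              "params": {"faultTypes": "current", "bestPractices": False},
              "id": 1},
        auth=AUTH,
        verify=False,  # MVIPs often use self-signed certificates; enable verification where possible
    )
    resp.raise_for_status()
    faults = resp.json()["result"]["faults"]
    return [f for f in faults if f.get("code") == "nodeOffline"]

if __name__ == "__main__":
    for fault in list_node_offline_faults():
        print(fault.get("nodeID"), fault.get("severity"), fault.get("details"))

An empty list is the expected result once node connectivity is restored and the fault has cleared; otherwise the returned entries show which node IDs are still unreachable.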
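Similarly, before taking a node down for maintenance (the "not hosting any volumes" KB above), one rough check is whether any drives assigned to that node are still reported as active. This is only an illustrative sketch using the Element ListDrives method, not the procedure from the KB itself; the connection details and node ID are placeholder assumptions.

# Minimal sketch (illustrative only, not the KB's procedure): check whether a given
# node still has drives in the "active" state before maintenance.
import requests

MVIP = "203.0.113.10"          # placeholder: cluster management virtual IP
API_VERSION = "12.3"           # placeholder: adjust to your Element API version
AUTH = ("admin", "password")   # placeholder: cluster admin credentials

def active_drives_on_node(node_id):
    """Return drives reported by ListDrives that belong to node_id and are still 'active'."""
    resp = requests.post(
        f"https://{MVIP}/json-rpc/{API_VERSION}",
        json={"method": "ListDrives", "params": {}, "id": 1},
        auth=AUTH,
        verify=False,  # self-signed certificates are common on MVIPs
    )
    resp.raise_for_status()
    drives = resp.json()["result"]["drives"]
    return [d for d in drives
            if d.get("nodeID") == node_id and d.get("status") == "active"]

if __name__ == "__main__":
    node_id = 4  # placeholder node ID
    remaining = active_drives_on_node(node_id)
    print(f"Node {node_id}: {len(remaining)} active drive(s) still assigned")

A non-zero count suggests the node is still participating in the cluster, so consult the linked KB before proceeding with maintenance.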