NetApp Knowledge Base

About 6 results
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/SolidFire_and_HCI_upgrades_unable_to_start_due_to_upgrade_package_upload_error
      Upload of the upgrade package fails, preventing SolidFire and HCI upgrades from starting.
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/SolidFire_power_supply_input_missing_or_out_of_range_or_node_offline_due_to_power_supply_being_incorrectly_configured
      A 'Power supply PS$ supply input missing or out of range' alert on a SolidFire storage system can be caused by the power supply showing as 'incorrectly configured' in the iDRAC log. If both power supplies are showing as 'incorrectly configured,' then the node will show a 'Node Offline' alert. The SolidFire Element OS UI "Alerts" reports one or more of the following: Power supply PS1 input missing or out of range. Power supply PS2 input missing or out of range. (See the ListClusterFaults sketch after the results list.)
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/Ping_loss_towards_out-of-band_interface_on_Element
      Applies to: NetApp SolidFire SF-Series (all models starting with SF); not applicable to NetApp H-series. Out-of-band interface (BMC/iDRAC), iDRAC firmware revision 1.57.57. Issue: SNMP tools are reporting intermittent ping drops towards the out-of-band interface, and manual ping tests towards the out-of-band interface are showing drops.
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/NetApp_SolidFire_nodes_with_The_system_board_PS_X_PG_Fail_voltage_is_outside_of_range_alert
      Applies to: SF38410, SF19210, SF9605, and SF4805 nodes. Issue: repetitive alerts are generated in the Event Logs of the node's BMC/iDRAC. These alerts are cosmetic and can be ignored. Example screenshots of the output can be seen in the Additional Information section of this article. For further investigation, please contact NetApp Technical Support and refer to this article.
    • https://kb.netapp.com/on-prem/solidfire/hardware_Kbs/IPMI_unresponsive_and_nodeOffline_repeatedly_after_the_periodic_BMC_cold_reset
      Beginning BMC cold reset and setting new reset date master-1[30228]: [Event] 30325 GlobalPool-0 serviceshared/EventReporter.cpp:582:ReportEvent|Successfully reported event={id=569216 type=PlatformHardwareEvent nodeID=6 serviceID=107 message=[Beginning BMC cold reset and setting new reset date] details={"bmcResetDate":"2021-09-02T12:49:41","bmcResetDurationMinutes":20160} reported=2021-08-19T12:49:41.644056Z published=2021-08-19T12:49:41.644104Z} mNumEventsPublished=21
    • https://kb.netapp.com/on-prem/solidfire/Element_OS_Kbs/All_drives_in_failed_state_with_UnlockFailedNoKey_after_adding_node_to_cluster
      Node added to the cluster with API /json-rpc/8.2?method=AddNodes&pendingNodes=[x]&autoInstall=false. After the node is added to the cluster: Error in NetApp SolidFire Active IQ, Details: 10 drive(s) with state: "UnlockFailedNoKey" driveID: <list of driveIDs>. All drives show a capacity of 0 GB under the (failed) drive list in Active IQ. In kern.log the drives show I/O errors on sector 0. When the failed drives are removed from the cluster, they re-appear after refreshing the page. (See the AddNodes call sketch after the results list.)
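
The last result references the AddNodes method of the Element JSON-RPC API. Below is a minimal sketch of that call in Python; the MVIP, credentials, and pending node ID are placeholder assumptions, not values from the article, while the endpoint version 8.2 matches the one quoted in the result.

import requests

MVIP = "10.0.0.1"             # hypothetical cluster management virtual IP (MVIP)
AUTH = ("admin", "password")  # hypothetical cluster admin credentials

# AddNodes with autoInstall=false, as in the article; pending node IDs
# can be discovered beforehand with the ListPendingNodes method.
payload = {
    "method": "AddNodes",
    "params": {"pendingNodes": [7], "autoInstall": False},  # example pendingNodeID
    "id": 1,
}
resp = requests.post(
    f"https://{MVIP}/json-rpc/8.2",
    json=payload,
    auth=AUTH,
    verify=False,  # lab convenience only; validate certificates in production
)
print(resp.json())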
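
The power supply result above lists alerts that appear under "Alerts" in the Element OS UI; those correspond to cluster faults, which can also be read over the same JSON-RPC API with ListClusterFaults. A rough sketch under the same placeholder assumptions (MVIP, credentials, endpoint version):

import requests

MVIP = "10.0.0.1"             # hypothetical cluster management virtual IP (MVIP)
AUTH = ("admin", "password")  # hypothetical cluster admin credentials

resp = requests.post(
    f"https://{MVIP}/json-rpc/8.2",
    json={"method": "ListClusterFaults", "params": {"faultTypes": "current"}, "id": 1},
    auth=AUTH,
    verify=False,  # lab convenience only; validate certificates in production
)
for fault in resp.json()["result"]["faults"]:
    # Each fault carries the node ID, severity, and detail text shown in the UI
    print(fault["nodeID"], fault["severity"], fault["details"])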