- ONTAP 9
- AFF models
- FAS models
- Excluding MetroCluster
Several events might require a graceful shutdown of ONTAP equipment, such as:
- Scheduled site power outage
- Data center wide maintenance
- Physical system move
- Preparation for future re-purposing of equipment
Prior to shutdown
Identifying hardware components
The ONTAP system comprises one or more of the following components. Use the following links for helpful details and images to assist onsite personnel in locating and identifying the equipment.
|Important: This procedure shuts down all nodes in the cluster and makes data on the cluster unavailable until the system is powered back up.|
- Log in to the cluster via SSH. Alternatively, log in from any node in the cluster using a local console cable.
- Invoke an AutoSupport message to suppress case creation for the expected duration of the shutdown event, including any descriptive text:
cluster1::> system node autosupport invoke -node * -type all -message "MAINT=8h Power Maintenance"
- Identify the SP/BMC IP addresses of all nodes:
cluster1::> system service-processor show -node * -fields address
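For reference, the command lists one SP/BMC address per node. The output below is illustrative only; the node names and IP addresses are placeholder examples, not values from this procedure:
node          address
------------- -------------
cluster1-01   192.168.1.201
cluster1-02   192.168.1.202
cluster1-03   192.168.1.203
cluster1-04   192.168.1.204
4 entries were displayed.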
- Exit the clustershell:
cluster1::> exit
- Connect to the SP/BMC over SSH using the IP address of any node identified in the previous step. Alternatively, connect a local console cable to the node. Log in using the same cluster administrator credentials.
If accessing via the SP/BMC prompt, switch to the system console and supply the cluster administrator credentials:
login as: admin
SP cluster1-01> system console
Type Ctrl-D to exit.
Note: Open an SSH session window to every SP/BMC for monitoring, as described in this step.
- Halt all the nodes in the cluster:
For most cluster configurations:
cluster1::> system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true
For clusters with SnapMirror Synchronous operating in StrictSync mode:
cluster1::> system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true -ignore-strict-sync-warnings true
- Respond to the prompt for each node:
Warning: Are you sure you want to halt node "cluster1-01"?
Warning: Are you sure you want to halt node "cluster1-02"?
Warning: Are you sure you want to halt node "cluster1-03"?
Warning: Are you sure you want to halt node "cluster1-04"?
4 entries were acted on.
- Wait for each node to halt completely by reaching the LOADER prompt.
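Each console session shows the node reaching the firmware prompt once the halt completes. The exact prompt text varies by platform and controller slot; the following is an illustrative example only:
...
System halting...
LOADER-A>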
- Connect to each node in the cluster via the SP/BMC (if not already connected) or using a local console cable, and confirm that each node is at the LOADER prompt (as in the previous step).
|The physical activity here ensures that no equipment is damaged while the system is powered down and that the correct equipment startup order is followed, so that the ONTAP system is ready to serve data after the event is complete.|
- Toggle each PSU rocker switch to the off position on each piece of equipment.
Note: Some PSUs do not have rocker switches.
- Remove the power cable connecting each PSU to the PDU.
- Visually confirm each component has successfully powered off.
- Ensure that all controllers, disk shelves, and switches associated with the cluster are powered down.
When the system is ready to be powered back up, follow the steps in How to power up all ONTAP nodes in a cluster following a graceful shutdown.
To power off a controller remotely:
SP> system power off
This will cause a dirty shutdown of your appliance. Continue? [y/n] y
SP> system power status
Chassis Power is off
This warning can be safely ignored only after a clean shutdown, when the node is already at the LOADER prompt; powering off in any other state can cause data loss.
Repeat from the other SP in the same chassis (where applicable).