NetApp Knowledge Base

How to manually collect logs from ONTAP 9?

Category:
ontap-9
Specialty:
CORE
Applies to

  • ONTAP 9
  • Manual log collection

Description

  • AutoSupport collects only the latest log messages written to each log file.
  • Very active logging may trigger log rotation, and some logs may not be captured in AutoSupport messages. For more details, see ONTAP Log Overview.
  • Due to payload limits, some diagnostic logging is not included in AutoSupport messages.

NetApp Support may request a manual collection of ALL the files from a node. Use the following procedure to complete this request.

The following directories and files are excluded:

  • /mroot/etc/log/stats
  • /mroot/etc/log/packet_traces
  • Any files named corefile that might be found in the /mroot/etc/log directory tree
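
To get a rough idea of how much log data each node holds before collecting, you can run an optional size check from the systemshell. This is a sketch only, assuming diag privilege; du does not apply the exclusions above, so the actual bundle will be somewhat smaller:

::> set diag

::*> systemshell -node * -command "du -sh /mroot/etc/log"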

 

Procedure

How to collect logs from a node not in takeover

  1. Run the following command to collect the full logs:

::> set d -c off ; systemshell local "date +%Y%m%d%H%M%S | xargs -I DATE -n1 ngsh -c 'set d -c off ; cluster application-record create -name mroot_bundle_tracker -value DATE -vserver 4294967295'" ; systemshell * "echo `ngsh -c 'set d -c off ; cluster application-record show' | grep mroot | awk '{print $3}'` > /mroot/etc/log/mroot_bundle" ; systemshell -node * -command "sudo find -L /mroot/etc/log -depth -print | egrep -v '\/log\/stats\/|\/log\/packet_traces\/|\/log\/mfg\/|\/corefile' | sudo tar -cLnvzf /mroot/etc/crash/`hostname`_etc-logs.tar.gz -T -" ; cluster application-record delete -name mroot_bundle_tracker ; systemshell local "ngsh -c 'set d -c off ; systemshell -node * rm /mroot/etc/log/mroot_bundle'" ; set admin -c on

Note: On each node, the command creates a gzipped tar file of the /mroot/etc/log files and places it in the /mroot/etc/crash directory.
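
For readers who want to understand the one-liner before running it, the sketch below breaks it into its components with comments. It is a reading aid only and elides parts of the syntax; run the single-line command above exactly as written:

# Reading aid only -- not meant to be run line by line
set d -c off                                   # diag privilege, confirmations off
# 1. Store a collection timestamp in a cluster application-record named mroot_bundle_tracker
systemshell local "date +%Y%m%d%H%M%S | xargs ... cluster application-record create ..."
# 2. Echo that timestamp into /mroot/etc/log/mroot_bundle on every node so it is captured in the bundle
systemshell * "echo ... > /mroot/etc/log/mroot_bundle"
# 3. On every node, find everything under /mroot/etc/log except the stats, packet_traces, and mfg
#    directories and any corefiles, and tar/gzip it to /mroot/etc/crash/<hostname>_etc-logs.tar.gz
systemshell -node * -command "sudo find -L /mroot/etc/log ... | egrep -v ... | sudo tar -cLnvzf ... -T -"
# 4. Clean up the application record and the marker file, then return to admin privilege
cluster application-record delete -name mroot_bundle_tracker
systemshell local "ngsh -c '... rm /mroot/etc/log/mroot_bundle'"
set admin -c on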

  2. From the SPI cluster web interface, download the log bundle to a client

The URL is https://<cluster-mgmt-ip>/spi/<node_name>/etc/crash/
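
The bundle can also be pulled from a client command line instead of a browser. A minimal curl sketch, assuming a cluster admin account with HTTP access and a self-signed certificate (-k skips certificate verification); substitute the file name reported in the next step:

curl -k -u admin "https://<cluster-mgmt-ip>/spi/<node_name>/etc/crash/<bundle-file>.tar.gz" -o <bundle-file>.tar.gz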

  3. After downloading the file, delete it from the node.

The file can be large and will needlessly consume root volume space if left in place.

a.  List and identify the files created:

::> set diag

::*> systemshell -node * -command "ls /mroot/etc/crash/*.tar.gz"  
  
Node: node-1
/mroot/etc/crash/node-1_etc-logs_202505301028.tar.gz
 
Node: node-2
/mroot/etc/crash/node-2_etc-logs_202505301028.tar.gz
2 entries were acted on.

b. Delete the files:

Note: Use caution with the systemshell rm command. Incorrect use can impact the system. Avoid using wildcard characters.

::*> systemshell -node node-1 -command "rm /mroot/etc/crash/node-1_etc-logs_202505301028.tar.gz"

::*> systemshell -node node-2 -command "rm /mroot/etc/crash/node-2_etc-logs_202505301028.tar.gz"

c. Check that the files have been removed:

::*> systemshell -node * -command "ls /mroot/etc/crash/*.tar.gz"
 
Node: node-1
ls: No match.
 
Node: node-2
ls: No match.
2 entries were acted on.

  4. Once the files are on a local client, upload each file individually to NetApp using one of the methods suggested in the article: How do I upload a file to NetApp?

Note: Do not zip or tar multiple log bundle files into a single file.  Upload each individually.

How to collect logs from a node in takeover 

  1. Connect to the up node in the HA pair.
  2. Replace Step 1 with the following: 

set d -c off ; systemshell local "sudo mount_partner"; systemshell local "date +%Y%m%d%H%M%S | xargs -I DATE -n1 ngsh -c 'set d -c off ; cluster application-record create -name mroot_bundle_tracker -value DATE -vserver 4294967295'" ; systemshell local -command "sudo find -L /partner/etc/log -depth -print | egrep -v '\/log\/stats\/|\/log\/packet_traces\/|\/log\/mfg\/|\/corefile' | sudo tar -cLnvzf /mroot/etc/crash/partner_etc-logs_`ngsh -c 'set d -c off ; cluster application-record show' | grep mroot | awk '{print $3}'`.tar.gz -T -" ; cluster application-record delete -name mroot_bundle_tracker ; systemshell local "sudo umount /partner"; set admin -c on

Note: The file name format is partner_etc-logs_202506021448.tar.gz and does not include the node name. Rename the file if needed, as in the sketch below.
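
If Support needs the node name in the file name, the bundle can be renamed on the up node before download. A sketch, assuming diag privilege, the example timestamp above, and a partner node named node-2:

::*> systemshell local "mv /mroot/etc/crash/partner_etc-logs_202506021448.tar.gz /mroot/etc/crash/node-2_etc-logs_202506021448.tar.gz"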

  3. Continue to Step 2 in the previous section.

Additional Information

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.