vol0 inode usage unexpectedly high and continuously growing
Applies to
- ONTAP 9.13.1+ without the fix for CONTAP-115704.
- SSH public key authentication in use, likely with very frequent SSH sessions from automation scripts.
- Node root volume (vol0) inode exhaustion or permanent increase.
- Node /tmp/ directory capacity exhaustion.
Issue
- High inode count caused by files matching /mroot/etc/cluster_config/vserver/.vserver_*/config/auth.[0-9a-zA-Z]+.
- CONTAP-115704 (legacy ID 1595173) lists several symptoms, one of which is a continuous increase of vol0 inode usage.
- In the following example, node MyCluster-02 is not affected (it was recently cleaned up), while node MyCluster-01 is heavily affected with more than 12 million inodes in use:
MyCluster::*> node run -node * -command df -i vol0
2 entries were acted on.
Node: MyCluster-01
Filesystem               iused      ifree  %iused  Mounted on
/vol/vol0/            12330575    8920551     58%  /vol/vol0/
Node: MyCluster-02
Filesystem               iused      ifree  %iused  Mounted on
/vol/vol0/              294533   20956593      1%  /vol/vol0/
Note: Usually both nodes are affected; the magnitude depends mainly on how the LIFs used for SSH access are distributed across the nodes.
- An unexpectedly high inode count in the root volume can lead to unforeseeable issues in various subsystems and processes.
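Because the leaked files follow a fixed naming pattern, counting the matching entries gives a quick estimate of how many inodes they consume. The sketch below is a hedged illustration only: /tmp/demo_config is a hypothetical stand-in for the real /mroot/etc/cluster_config/vserver/.vserver_*/config directory (which is only reachable from the node's systemshell), and GNU find is assumed for the -regextype option.

```shell
# Hypothetical demo directory standing in for the real vserver config
# path /mroot/etc/cluster_config/vserver/.vserver_*/config
demo=/tmp/demo_config
mkdir -p "$demo"

# Simulate two leaked temporary auth files plus one unrelated file
touch "$demo/auth.1a2B3c" "$demo/auth.XYZ90" "$demo/config.real"

# Count files matching the auth.<alphanumeric> pattern from the Issue
# section (GNU find assumed for -regextype posix-extended)
find "$demo" -type f -regextype posix-extended \
    -regex '.*/auth\.[0-9a-zA-Z]+' | wc -l
```

In this demo the count is 2; on an affected node the equivalent count against the real path can reach millions, matching the iused figure reported by df -i.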
