What are the performance impacts of changing the size of maxdirsize?
Applies to
- ONTAP 9
- Cloud Volumes ONTAP
- Data ONTAP 8
- Directory Size
Answer
Performance impact is difficult to quantify precisely, but several costs are well understood when a single directory contains many files:
- Lookups in a large directory (for example, during a find operation) consume significant CPU and memory.
- Converting a large directory from NFS-only access to CIFS consumes significant resources over an extended period.
- When a directory is loaded into memory, the entire directory tree is loaded.
- Unused portions may later be evicted, but re-reading the directory from disk and finding memory to hold it again incurs a performance cost.
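The lookup cost described above can be illustrated with a simple analogy (this models the general principle of scanning an unindexed directory, not WAFL's actual on-disk format):

```python
# Illustrative analogy only: a flat, unindexed directory forces a
# linear scan, so lookup cost grows with the number of entries.

def linear_lookup(entries, name):
    """Scan every entry until a match is found: O(n) comparisons."""
    comparisons = 0
    for entry in entries:
        comparisons += 1
        if entry == name:
            return comparisons
    return comparisons  # not found: scanned all n entries

# A hypothetical directory with 1,000,000 files; the target is the
# last entry, so the scan examines every name before matching.
directory = [f"file{i:07d}.dat" for i in range(1_000_000)]
cost = linear_lookup(directory, "file0999999.dat")
print(cost)  # worst case: one lookup touches all 1,000,000 entries
```

A find operation repeats this kind of work across many names, which is why CPU and memory use grows with directory size.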
Starting in ONTAP 9.2, directory indexing creates an index file for directories larger than 2MB, reducing the number of lookups required and avoiding cache misses.
- Indexing usually improves large-directory performance. However, it provides little benefit for wildcard searches and readdir operations, which must still examine every entry.
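Why an index helps exact-name lookups but not wildcard or readdir workloads can be sketched as follows (an analogy using a Python dict, not WAFL's actual index structure; the file names are hypothetical):

```python
import fnmatch

# Analogy only: an index maps an exact name directly to its entry,
# but a wildcard search still has to test every entry in turn.

directory = [f"report_{i:05d}.txt" for i in range(100_000)]
index = {name: pos for pos, name in enumerate(directory)}  # built once

# Exact lookup: one index probe instead of scanning 100,000 names.
pos = index["report_09999.txt"]

# Wildcard lookup: the index cannot narrow the search; every entry
# must be examined against the pattern.
matches = [n for n in directory if fnmatch.fnmatch(n, "report_0999*.txt")]
print(pos, len(matches))
```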
- When possible, use the latest version of ONTAP for high file count environments to gain benefits from WAFL improvements.
Note: Values for maxdirsize are hard-coded not to exceed 4GB. To avoid performance issues, NetApp recommends keeping maxdirsize at or below 1GB.
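If you do need to adjust maxdirsize, clustered ONTAP exposes it as an advanced-privilege volume parameter. The session below is a sketch: the SVM and volume names are hypothetical, the value is assumed to be in KB (1048576 KB = 1GB, the recommended ceiling), and exact syntax may vary by release, so verify against the command reference for your version.

```
::> set -privilege advanced
::*> volume show -vserver svm1 -volume vol1 -fields maxdir-size
::*> volume modify -vserver svm1 -volume vol1 -maxdir-size 1048576
```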