NetApp Knowledgebase

The Difference Between storePool and Lock Manager Objects

Applies to

  • storePool
  • ONTAP 9


  • What is a storePool? 
    • The storePool is a collection of memory allocations (pools) that are used to facilitate NFSv4/4.1 state work.
      • There are 12 different storePools, each covering a different area of NFSv4/4.1 state:
StorePool Name              Description
storePool_ByteLockAlloc     Byte-range locks
storePool_ClientAlloc       ClientIDs
storePool_CopyStateAlloc    Copy offload related work
storePool_DelegAlloc        Delegations for reads and writes (client caching)
storePool_DelegStateAlloc   Delegation StateIDs
storePool_LayoutAlloc       Layouts (used for pNFS referrals)
storePool_LayoutStateAlloc  Layout StateIDs
storePool_LockStateAlloc    Lock StateIDs
storePool_OpenAlloc         Opens
storePool_OpenStateAlloc    Open StateIDs
storePool_OwnerAlloc        Owners
storePool_StringAlloc       Opaque data sent for ClientIDs and Owners
  • Where are storePool resources located?
    • Each node has its own unique storePool resources, which reside in the network blade (nblade) of that node. They are consumed by clients that mount through LIFs hosted on that node.
  • How do storePool resources differ from lock manager objects? 
    • The storePool resources are used specifically for NFSv4/4.1 state work; they do not track open or locked files at the file system layer (dblade). Lock manager objects are responsible for file system open/lock tracking, including resolution of open/lock conflicts.
  • How does storePool exhaustion differ from lock manager object exhaustion?
    • storePool exhaustion occurs at the network layer (nblade) of the node hosting the LIF that clients mount; it is an exhaustion of the NFSv4/4.1 objects used to track state. Lock manager exhaustion is exhaustion of the objects responsible for opening and locking files at the file system layer (data blade, or dblade), and it affects the volumes located on the node where the exhaustion occurred.
  • Monitoring storePool consumption:
    • storePool objects can be monitored with statistics counters, which can warn when a specific pool becomes full. The pool allocation numbers are dynamic and change constantly as clients open and close files. The counters can be retrieved manually via the CLI, which can be leveraged in scripts for alerting. In ONTAP 9.2 and later, an EMS event fires when a storePool resource reaches 80% of its maximum threshold.
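The 80% check above can be scripted against manually retrieved counter output. A minimal sketch follows; the counter names come from the table earlier in this article, but the three-column "name current max" output format is a stand-in assumption, since the exact layout of the statistics output varies by ONTAP release:

```python
# Minimal sketch: flag storePool counters at or above 80% of their maximum.
# The sample output format below is illustrative, not the literal ONTAP
# statistics output.

WARN_THRESHOLD = 0.80  # matches the 80% EMS threshold in ONTAP 9.2+

def parse_counters(text):
    """Parse 'name current max' lines into (name, current, max) tuples."""
    rows = []
    for line in text.strip().splitlines():
        name, current, maximum = line.split()
        rows.append((name, int(current), int(maximum)))
    return rows

def pools_over_threshold(rows, threshold=WARN_THRESHOLD):
    """Return (name, utilization) for pools at or above the threshold."""
    return [(name, cur / mx) for name, cur, mx in rows
            if mx and cur / mx >= threshold]

sample = """
storePool_OpenAlloc      61000 72000
storePool_OpenStateAlloc 30000 72000
storePool_OwnerAlloc     70000 72000
"""

for name, pct in pools_over_threshold(parse_counters(sample)):
    print(f"WARNING: {name} at {pct:.0%} of maximum")
```

In a real deployment the `sample` text would come from the cluster CLI over SSH, and the warning would feed whatever alerting system is already in place.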
  • How can specific clients potentially cause problems?
    • In certain circumstances, clients do not close their OPENs in the way the filer expects. When this occurs, the client is unaware that it still has that OPEN allocated, so the server never removes the OpenState object and the resource is never returned to the pool. If this behavior continues, the client's orphaned resources can exhaust the storePool on the server. Dumping NFSv4 locks can show which client is consuming the allocations in the storePool; once that client is restarted, ONTAP frees the storePool resources associated with it.
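Identifying the offending client from a lock dump amounts to counting state entries per client and sorting. The record format below is hypothetical (a real ONTAP lock dump looks different), but the aggregation idea can be sketched as:

```python
from collections import Counter

# Minimal sketch: given per-lock records that include the owning client's
# IP address, count entries per client to spot the one orphaning storePool
# resources. The dict-based record format here is a stand-in assumption.

def top_consumers(lock_records, n=3):
    """Count lock/state entries per client IP; return the top n as
    (client_ip, count) pairs, highest count first."""
    counts = Counter(rec["client_ip"] for rec in lock_records)
    return counts.most_common(n)

records = [
    {"client_ip": "10.0.0.5", "type": "OpenState"},
    {"client_ip": "10.0.0.5", "type": "LockState"},
    {"client_ip": "10.0.0.9", "type": "OpenState"},
    {"client_ip": "10.0.0.5", "type": "OpenState"},
]

# The client holding the most state sorts first.
print(top_consumers(records))
```

The client at the top of this list is the candidate for a restart, after which ONTAP reclaims its storePool allocations.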
  • A client mounts a LIF on node A and a LIF on node B via NFSv4. Both of these mounts access a volume on node A. Exhaustion of storePool resource occurs on Node A. Which mount will be affected?
    • Only connections to Node A will be affected. These allocations live in the networking layer (nblade) of the node that terminates the client connections; the location of the volumes is irrelevant to the issue.

Additional Information