NetApp Knowledge Base

CONTAP-586796: FlexCache cache locks are not getting freed

Visibility: Public
Category: ontap-9
Specialty: nas
Issue

  • Buildup of lock manager (lmgr) locks until the limit is hit on a FlexCache cache volume (a sketch for checking the live counts follows the excerpt):

[NODE-01: wafl_exempt01: lmgr.lock.type.counts:notice]: The number of lock manager object types in pool NODE-01 are: share locks 1235301 byte locks 0 waiting locks 0 callback waiting 0 pfs (clustered Data ONTAP FlexCache delegations) 0 cached locks 0 WAN FlexCache delegations 0
[NODE-01: wafl_exempt10: ems.engine.suppressed:debug]: Event 'lmgr.owners.threshold.hit' suppressed 828732 times in last 901 seconds.
[NODE-01: wafl_exempt10: lmgr.owners.threshold.hit:error]: The current number of owner objects 1236210 with maximum limit of 1500000 has reached the threshold limit percentage of 80 for pool "NODE-01". The current number/limit of other lock manager objects are: files 276270/1500000, hosts 28/100000 and locks 1236402/3000000.
[NODE-01: wafl_exempt07: ems.engine.suppressed:debug]: Event 'lmgr.lock.type.counts' suppressed 217778 times in last 301 seconds.
[NODE-01: wafl_exempt07: lmgr.lock.type.counts:notice]: The number of lock manager object types in pool NODE-01 are: share locks 1236307 byte locks 0 waiting locks 0 callback waiting 0 pfs (clustered Data ONTAP FlexCache delegations) 0 cached locks 0 WAN FlexCache delegations 0
[NODE-01: wafl_exempt01: ems.engine.suppressed:debug]: Event 'lmgr.lock.type.counts' suppressed 201256 times in last 301 seconds.
[NODE-01: wafl_exempt01: lmgr.lock.type.counts:notice]: The number of lock manager object types in pool NODE-01 are: share locks 1237460 byte locks 0 waiting locks 0 callback waiting 0 pfs (clustered Data ONTAP FlexCache delegations) 0 cached locks 0 WAN FlexCache delegations 0
[NODE-01: wafl_exempt03: ems.engine.suppressed:debug]: Event 'lmgr.lock.type.counts' suppressed 215414 times in last 301 seconds.
[NODE-01: wafl_exempt03: lmgr.lock.type.counts:notice]: The number of lock manager object types in pool NODE-01 are: share locks 1238896 byte locks 0 waiting locks 0 callback waiting 0 pfs (clustered Data ONTAP FlexCache delegations) 0 cached locks 0 WAN FlexCache delegations 0
[NODE-01: wafl_exempt00: ems.engine.suppressed:debug]: Event 'lmgr.owners.threshold.hit' suppressed 636347 times in last 901 seconds.
[NODE-01: wafl_exempt00: lmgr.owners.threshold.hit:error]: The current number of owner objects 1240908 with maximum limit of 1500000 has reached the threshold limit percentage of 80 for pool "NODE-01". The current number/limit of other lock manager objects are: files 276559/1500000, hosts 28/100000 and locks 1241082/3000000.
[NODE-01: wafl_exempt10: lmgr.owners.limit.hit:error]: The number of owner objects 1500000 has reached the limit 1500000 for the pool NODE-01. The current number/limit of other lock manager objects are: files 305835/1500000 hosts 28/100000, locks 1500007/3000000.
[NODE-01: wafl_exempt02: ems.engine.suppressed:debug]: Event 'lmgr.lock.type.counts' suppressed 48798 times in last 301 seconds.
[NODE-01: wafl_exempt02: lmgr.lock.type.counts:notice]: The number of lock manager object types in pool NODE-01 are: share locks 1500007 byte locks 0 waiting locks 0 callback waiting 0 pfs (clustered Data ONTAP FlexCache delegations) 0 cached locks 0 WAN FlexCache delegations 0
[NODE-01: wafl_exempt07: ems.engine.suppressed:debug]: Event 'lmgr.lock.type.counts' suppressed 53154 times in last 301 seconds.
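
While the buildup is occurring, the live lock counts can be compared against the per-node pool limits reported in the lmgr.* events above (the pool is named after the node, so all volumes hosted on that node count against the same limits). A minimal sketch from the cluster shell, assuming the vserver name svm1 from the volume listing later in this article; since only share locks are growing in these events, the output is filtered to share-level locks:

::*> vserver locks show -vserver svm1 -lock-type share-level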

  • Other errors observed (a way to correlate them with open SMB sessions is sketched after the excerpt):

[NODE-01: kernel: Nblade.streamMsg:debug]: params: {'errorCode': '7301', 'fileName': 'src/Protocols/Cifs/SmbRequests/Smb2CloseCmd.cpp', 'lineNumber': '999'}
[NODE-01: kernel: Nblade.streamMsg:debug]: params: {'errorCode': '7301', 'fileName': 'src/Protocols/Cifs/SmbRequests/Smb2CloseCmd.cpp', 'lineNumber': '999'}
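
The errorCode 7301 entries come from SMB2 CLOSE handling (Smb2CloseCmd.cpp in the fileName field), which lines up with locks taken at file open not being released at close. A hedged way to cross-check from the cluster shell is to list the open SMB sessions and files on the affected vserver (svm1 is assumed from the volume listing below):

::*> vserver cifs session show -vserver svm1
::*> vserver cifs session file show -vserver svm1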


  • Affected volumes are multi-constituent FlexCache volumes (a sketch for confirming the FlexCache relationships follows the output):

::*> vol show -is-constituent true
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      cachefg1__0001 cluster1_01_SSD_1 online RW    6.25GB     5.33GB   14%
svm1      cachefg1__0002 cluster1_02_SSD_1 online RW    6.25GB     5.33GB   14%
svm1      cachefg1__0003 cluster1_01_SSD_1 online RW    6.25GB     5.33GB   14%
svm1      cachefg1__0004 cluster1_02_SSD_1 online RW    6.25GB     5.30GB   15%
svm1      cachefg1__0005 cluster1_01_SSD_1 online RW    6.25GB     5.30GB   15%
svm1      cachefg1__0006 cluster1_02_SSD_1 online RW    6.25GB     5.33GB   14%
svm1      cachefg1__0007 cluster1_01_SSD_1 online RW    6.25GB     5.30GB   15%
svm1      cachefg1__0008 cluster1_02_SSD_1 online RW    6.25GB     5.33GB   14%
svm1      cachefg__0001 cluster1_02_SSD_1 online RW       50GB    49.50GB    1%
svm1      cachegroup__0001 cluster1_01_SSD_1 online RW  6.67GB     5.36GB   19%
svm1      cachegroup__0002 cluster1_02_SSD_1 online RW  6.67GB     5.32GB   20%
svm1      cachegroup__0003 cluster1_02_SSD_1 online RW  6.67GB     5.32GB   20%
svm1      cachegroup__0004 cluster1_01_SSD_1 online RW  6.67GB     5.38GB   19%
svm1      cachegroup__0005 cluster1_02_SSD_1 online RW  6.67GB     5.32GB   20%
svm1      cachegroup__0006 cluster1_01_SSD_1 online RW  6.67GB     5.34GB   19%
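
The -is-constituent filter lists FlexGroup constituents in general, so it helps to confirm which of these belong to FlexCache volumes rather than ordinary FlexGroups. A minimal sketch that lists the cache-to-origin relationships directly, assuming the same vserver as above:

::*> volume flexcache show -vserver svm1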

