
ElasticSearch indexing failures cause performance issues in OnCommand Insight

Applies to

  • OnCommand Insight (OCI) 7.x

Issue

The following issues may be related to failed indexing in ElasticSearch. They may be seen together or separately.

  • Slow WebUI searches
  • High CPU usage on the OCI server
  • High memory usage on the OCI server
  • File system near full or full on the OCI server
  • 404 Page Not Found error message when trying to access the WebUI
  • Datasources failing with error "Internal error: Cannot update server (Server returned HTTP response code: 500 for URL"

In the server_indexer.log (located in \SANscreen\wildfly\standalone\log), you may see failed indexing messages similar to the following:

2018-01-15 20:35:00,436 ERROR [EJB default - 7] QueryIndexBuilder (QueryIndexBuilder.java:195) - Error indexing InternalVolume
com.netapp.oci.es.ElasticsearchException: [bulkIndexById][INDEX_NAME:oci-inventory-internalvolume-2018-01-15-20-33] [ANOMALY_INDEX_TYPE: internalvolume] [IDS: [7864440,

Caused by: com.netapp.oci.es.ElasticsearchException: Error bulk indexing data. failure in bulk execution:
[19439]: index [oci-inventory-internalvolume-2018-01-23-07-56], type [internalvolume], id [1160924], message [java.lang.IllegalArgumentException: Document contains at least one immense term in field="annotationValues.text.Automounter Path" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[47, 105, 99, 101, 110, 103, 47, 100, 102, 109, 47, 114, 103, 98, 44, 32, 47, 105, 99, 101, 110, 103, 47, 100, 102, 109, 47, 98, 108, 97]...', original message: bytes can be at most 32766 in length; got 37802]
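
The "immense term" error above means that a single annotation value (in this example, the Automounter Path text annotation) exceeds the 32766-byte limit that Lucene enforces on any one indexed term. The limit applies to the UTF-8 encoded length of the value, not to its character count. As a rough illustration only (this is not OCI code), the check can be expressed as follows, using a hypothetical annotation value shaped like a long comma-separated list of paths, similar to the byte prefix quoted in the log:

# Rough illustration of the Lucene term-size limit referenced above (not OCI code).
MAX_TERM_BYTES = 32766

def exceeds_lucene_term_limit(value):
    # Lucene's limit applies to the UTF-8 encoded length, not the character count.
    return len(value.encode("utf-8")) > MAX_TERM_BYTES

# Hypothetical annotation value: a long comma-separated list of automounter paths.
annotation_value = ", ".join("/iceng/dfm/rgb" for _ in range(2500))
print(len(annotation_value.encode("utf-8")))        # well over the 32766-byte limit
print(exceeds_lucene_term_limit(annotation_value))  # True: too large to index as one term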
 

In the server.log (located in \SANscreen\wildfly\standalone\log), you may see failure messages similar to the following:

2019-06-18 07:11:26,296 ERROR [default task-33057] OriginatorWorkingSet(OriginatorWorkingSet.java:488) - <datasource> failed on submit, due to: Errors encountered while executing delete extendedData documents. failure in bulk execution:
[0]: index [oci-extended-data-v2], type [_doc], id [1871197], message [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]
[1]: index [oci-extended-data-v2], type [_doc], id [3310508], message [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]
[2]: index [oci-extended-data-v2], type [_doc], id [1567170], message [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]
[3]: index [oci-extended-data-v2], type [_doc], id [4478203], message [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]
[4]: index [oci-extended-data-v2], type [_doc], id [4311330], message [ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]]
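
The ClusterBlockException entries above, with [FORBIDDEN/12/index read-only / allow delete (api)], indicate that Elasticsearch has marked the index read-only. Elasticsearch typically applies this block automatically when the disk flood-stage watermark is exceeded, which is consistent with the "file system near full or full" symptom listed above. The sketch below (not an official OCI procedure) shows how the block could be inspected and, only after disk space has been freed, cleared through the Elasticsearch settings API; the endpoint URL is an assumption, and the HTTP port of the Elasticsearch instance embedded in OCI may differ:

# Sketch only: inspect and clear the read-only block (FORBIDDEN/12) that Elasticsearch
# applies when the disk flood-stage watermark is exceeded.
# ES_URL is an assumption; adjust it to the OCI Elasticsearch HTTP endpoint.
import json
import urllib.request

ES_URL = "http://localhost:9200"

def es_request(method, path, body=None):
    # Send a small JSON request to the Elasticsearch REST API and decode the reply.
    data = json.dumps(body).encode("utf-8") if body is not None else None
    request = urllib.request.Request(ES_URL + path, data=data, method=method,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# Show which indices currently carry the read_only_allow_delete block.
print(json.dumps(es_request("GET", "/_all/_settings/index.blocks.read_only_allow_delete"), indent=2))

# Only after disk space has been freed: remove the block so deletes and updates can resume.
print(es_request("PUT", "/_all/_settings", {"index.blocks.read_only_allow_delete": None}))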

 

In the oci-cluster.log (located in \SANscreen\program_data\elasticsearch\logs), you may see failure messages similar to the following:

[2018-03-26T17:55:13,054][WARN ][o.e.c.a.s.ShardStateAction] [<datasource>] [oci-timeseries-qtree-2018-03-11][1] received shard failed for shard id [[oci-timeseries-qtree-2018-03-11][1]], allocation id [sSKYeTqNR46jrWY-rGw1XA], primary term [0], message [failed recovery], failure [RecoveryFailedException[[oci-timeseries-qtree-2018-03-11][1]: Recovery failed on {XXXXXXXX}{6HO5LMOIRYCEAWKINLSCIQ}{TB2VITKTREOZQLBQLXWMIG}{127.0.0.1}{127.0.0.1:9310}]; nested: IndexShardRecoveryException[failed recovery]; nested: IllegalStateException[pre-1.4 translog found [F:\NetApp\SANscreen\program_data\elasticsearch\data\nodes\0\indices\KeESoUKHROaLhn6_WseZVQ\1\translog\translog-1.tlog]]; ]
org.elasticsearch.indices.recovery.RecoveryFailedException: [oci-timeseries-qtree-2018-03-11][1]: Recovery failed on {XXXXXXXX}{6HO5LMOIRYCEAWKINLSCIQ}{TB2VITKTREOZQLBQLXWMIG}{127.0.0.1}{127.0.0.1:9310}

Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed recovery
 at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:299) ~[elasticsearch-5.4.2.jar:5.4.2]
 at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.4.2.jar:5.4.2]
 at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1238) ~[elasticsearch-5.4.2.jar:5.4.2]
 at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$1(IndexShard.java:1486) ~[elasticsearch-5.4.2.jar:5.4.2]
 ... 4 more
Caused by: java.lang.IllegalStateException: pre-1.4 translog found [F:\NetApp\SANscreen\program_data\elasticsearch\data\nodes\0\indices\KeESoUKHROaLhn6_WseZVQ\1\translog\translog-1.tlog]
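
The "pre-1.4 translog found" failure above means the shard cannot be recovered because an old or incompatible translog file is present, which typically leaves the affected index (here oci-timeseries-qtree-2018-03-11) in a red state. A minimal sketch for confirming which indices failed recovery and why, using the same assumed endpoint as in the previous example:

# Sketch only: confirm which indices failed recovery and why.
# ES_URL is an assumption; the OCI Elasticsearch HTTP port may differ.
import urllib.request

ES_URL = "http://localhost:9200"

def es_get(path):
    # Plain GET against the Elasticsearch REST API, returned as text.
    with urllib.request.urlopen(ES_URL + path) as response:
        return response.read().decode("utf-8")

# Cluster health: "red" means at least one primary shard could not be assigned.
print(es_get("/_cluster/health?pretty"))

# List only the unhealthy indices (for example oci-timeseries-qtree-2018-03-11 above).
print(es_get("/_cat/indices?v&health=red"))

# Ask Elasticsearch to explain why the first unassigned shard it finds cannot be
# allocated (this call returns an error if no shard is currently unassigned).
print(es_get("/_cluster/allocation/explain?pretty"))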

 

 

