How to solve the "Lock obtain timed out" problem when using Solr

Problem Description


I have two cores for our Solr system (Solr version 3.6.1). When I invoke the following command line on our dedicated Solr server to add and then index a file:

java -Durl=http://solrprod:8080/solr/original/update -jar /home/solr/solr3/biomina/solr/post.jar /home/solr/tmp/2008/c2m-dump-01.noDEID_clean.

I get an exception in the /usr/share/tomcat7/logs/solr.2013-12-11.log file (after about 6 minutes of waiting):

SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/solr/solr3/biomina/solr/original/data/index/write.lock

(You can see the detailed output of it at the end of this message).

I tried to modify the time-out for locks (by setting writeLockTimeout to 300000), but this did not solve the problem. I'm not using any custom script, just the post.jar that comes with Solr 3.6.1, to add and index.

Any ideas about what needs to be changed to get rid of this error and successfully add and index the file?

Contents of /home/solr/solr3/biomina/solr/solr.:

<?

Relevant part of solrconfig.xml (original core):

<indexConfig>
    <!-- maxFieldLength specifies max number of *tokens* indexed per
field. Default: 10000 -->
    <!-- <maxFieldLength>10000</maxFieldLength>  -->
    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter.
Default: 1000 -->
    <writeLockTimeout>300000</writeLockTimeout>

Relevant part of solrconfig.xml (deidentified core):

<indexConfig>
    <!-- maxFieldLength specifies max number of *tokens* indexed per
field. Default: 10000 -->
    <!-- <maxFieldLength>10000</maxFieldLength>  -->
    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter.
Default: 1000 -->
    <writeLockTimeout>300000</writeLockTimeout>
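For context, the NativeFSLock named in the exception corresponds to Solr's lockType setting, which sits alongside writeLockTimeout in solrconfig.xml. A sketch of the combined fragment (the lockType line is added here for illustration and is not part of the original file; the timeout value is unchanged):

```xml
<indexConfig>
    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter.
Default: 1000 -->
    <writeLockTimeout>300000</writeLockTimeout>
    <!-- "native" uses OS-level file locking and is what produces the
NativeFSLock@... entries seen in the log above -->
    <lockType>native</lockType>
</indexConfig>
```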

Detailed Output of Exception

Dec 11, 2013 11:27:25 AM org.apache.solr.core.SolrCore execute
INFO: [original] webapp=/solr path=/update params={} status=500 QTime=300070
Dec 11, 2013 11:32:25 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/solr/solr3/biomina/solr/original/data/index/write.lock
    at org.apache.lucene.store.Lock.obtain(Lock.java:84)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1098)
    at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:84)
    at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:101)
    at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:171)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:219)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
    at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:115)
    at org.apache.solr.handler.

System details:

uname -a

Linux solrprod 3.0.93-0.8-default #1 SMP Tue Aug 27 08:44:18 UTC 2013 (70ed288) x86_64 x86_64 x86_64 GNU/Linux

java -version

java version "1.7.0"
Java(TM) SE Runtime Environment (build pxa6470sr6-20131015_01(SR6))
IBM J9 VM (build 2.6, JRE 1.7.0 Linux amd64-64 Compressed References 20131013_170512 (JIT enabled, AOT enabled)
J9VM - R26_Java726_SR6_20131013_1510_B170512
JIT  - r11.b05_20131003_47443
GC   - R26_Java726_SR6_20131013_1510_B170512_CMPRSS
J9CL - 20131013_170512)
JCL - 20131011_01 based on Oracle 7u45-b18
Solution

The following modifications solved the issue:

  • Applied the changes described at https://stackoverflow.com/a/3035916/236007

  • Switched to the Oracle Java runtime (it was previously the IBM Java runtime).

  • Added ulimit -v unlimited to /etc/init.d/tomcat7.

  • Modified the /usr/share/tomcat7/bin/setenv.sh file as follows (giving it about 4 GB of memory):

    export JAVA_OPTS="$JAVA_OPTS -Xmx4000m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/data/tomcat_dump"
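The ulimit and setenv.sh changes above can be sketched as two shell fragments; the paths and values are the ones given in the list, while the surrounding comments are illustrative only:

```shell
# 1) In /etc/init.d/tomcat7, before Tomcat is started:
#    lift the virtual-memory cap for the Tomcat process.
ulimit -v unlimited

# 2) In /usr/share/tomcat7/bin/setenv.sh (sourced by catalina.sh at startup):
#    give the JVM about 4 GB of heap and dump the heap on OutOfMemoryError.
export JAVA_OPTS="$JAVA_OPTS -Xmx4000m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/data/tomcat_dump"
```

After restarting Tomcat, the new heap setting can be confirmed by checking the running process's command line (e.g. with ps).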

