SolrException log


SolrException log

Bastian S.
Hi,

we are using Solr 1.4.1 in a master-slave setup with replication; requests are load-balanced to both instances. This generally works just fine, but the slave sometimes behaves strangely with a "SolrException log" (trace below). We have been using 1.4.1 for weeks now, and this has happened only a few times so far, and only on the slave. The problem seemed to be gone once we added a cron job to send a periodic <optimize/> (once a day) to the master, but today it happened again. The index contains 55 files right now; after an optimize there are only 10. So it seems to be a problem when the index is spread across a lot of files. The slave never recovers once this exception shows up; the only thing that helps is a restart.

Is this a known issue? The only workaround would be to track the commit counts and send additional <optimize/> requests after a certain number of commits, but I'd prefer solving the problem to building a workaround.

Any hints/thoughts on this issue are very much appreciated. Thanks in advance for your help.

Cheers,
Bastian

Aug 11, 2010 4:51:58 PM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/select params={fl=media_id,keyword_1004&sort=priority_1000+desc,+score+desc&indent=off&start=0&q=mandant_id:1000+AND+partner_id:1000+AND+active_1000:true+AND+cat_id_path_1000:7231/7258*+AND+language_id:1004&rows=24&version=2.2} status=500 QTime=2
Aug 11, 2010 4:51:58 PM org.apache.solr.common.SolrException log
SEVERE: java.io.IOException: read past EOF
        at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
        at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
        at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
        at org.apache.lucene.index.SegmentTermDocs.next(SegmentTermDocs.java:112)
        at org.apache.lucene.search.FieldCacheImpl$IntCache.createValue(FieldCacheImpl.java:461)
        at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:224)
        at org.apache.lucene.search.FieldCacheImpl.getInts(FieldCacheImpl.java:430)
        at org.apache.lucene.search.FieldCacheImpl$IntCache.createValue(FieldCacheImpl.java:445)
        at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:224)
        at org.apache.lucene.search.FieldCacheImpl.getInts(FieldCacheImpl.java:430)
        at org.apache.lucene.search.FieldComparator$IntComparator.setNextReader(FieldComparator.java:332)
        at org.apache.lucene.search.TopFieldCollector$MultiComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:435)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:249)
        at org.apache.lucene.search.Searcher.search(Searcher.java:171)
        at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:988)
        at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
        at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
        at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
        at org.mortbay.jetty.servlet.WebApplicationHandler$CachedChain.doFilter(WebApplicationHandler.java:821)
        at org.mortbay.jetty.servlet.WebApplicationHandler.dispatch(WebApplicationHandler.java:471)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:568)
        at org.mortbay.http.HttpContext.handle(HttpContext.java:1530)
        at org.mortbay.jetty.servlet.WebApplicationContext.handle(WebApplicationContext.java:633)
        at org.mortbay.http.HttpContext.handle(HttpContext.java:1482)
        at org.mortbay.http.HttpServer.service(HttpServer.java:909)
        at org.mortbay.http.HttpConnection.service(HttpConnection.java:820)
        at org.mortbay.http.ajp.AJP13Connection.handleNext(AJP13Connection.java:295)
        at org.mortbay.http.HttpConnection.handle(HttpConnection.java:837)
        at org.mortbay.http.ajp.AJP13Listener.handleConnection(AJP13Listener.java:212)
        at org.mortbay.util.ThreadedServer.handle(ThreadedServer.java:357)
        at org.mortbay.util.ThreadPool$PoolThread.run(ThreadPool.java:534)

Re: SolrException log

Tommaso Teofili
Hi Bastian,
this seems to be related to I/O and file deletion (optimization compacts and removes index files). Are you running Solr on NFS or a distributed file system?
You could set a proper IndexDeletionPolicy (SolrDeletionPolicy) in solrconfig.xml to handle this.
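For example, a deletionPolicy block in solrconfig.xml could look like the sketch below; the concrete values here are only placeholders to illustrate the knobs, tune them to your commit and replication frequency:

    <deletionPolicy class="solr.SolrDeletionPolicy">
      <!-- keep a few recent commit points instead of only the latest one -->
      <str name="maxCommitsToKeep">3</str>
      <!-- also retain the most recent optimized commit point -->
      <str name="maxOptimizedCommitsToKeep">1</str>
      <!-- prune commit points older than this age (DateMathParser syntax) -->
      <str name="maxCommitAge">1DAY</str>
    </deletionPolicy>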
My 2 cents,
Tommaso

2010/8/11 Bastian Spitzer <[hidden email]>

> [quoted original message and stack trace snipped]

Re: SolrException log

Bastian S.
Hi Tommaso,

Thanks for your reply. The Solr files are on a local disk, on a reiserfs. I'll try setting a deletion policy and report back whether that solves the problem. Thank you for the hint.

Cheers,
Bastian

-----Original Message-----
From: Tommaso Teofili [mailto:[hidden email]]
Sent: Monday, August 23, 2010 15:31
To: [hidden email]
Subject: Re: SolrException log

[quoted message and stack trace snipped]

Re: SolrException log

Bastian S.
I can't seem to find decent documentation on how these parameters actually work.

This is the default example block from solrconfig.xml:

    <deletionPolicy class="solr.SolrDeletionPolicy">
      <!-- The number of commit points to be kept -->
      <str name="maxCommitsToKeep">1</str>
      <!-- The number of optimized commit points to be kept -->
      <str name="maxOptimizedCommitsToKeep">0</str>
      <!--
          Delete all commit points once they have reached the given age.
          Supports DateMathParser syntax e.g.
         
          <str name="maxCommitAge">30MINUTES</str>
          <str name="maxCommitAge">1DAY</str>
      -->
    </deletionPolicy>

So do I have to increase maxCommitsToKeep to a value of 2 when I add a maxCommitAge parameter, or will 1 still be enough? Do I have to call optimize more than once a day when I set maxOptimizedCommitsToKeep to a value of 1?

Can someone please explain how this is supposed to work?

-----Original Message-----
From: Bastian Spitzer [mailto:[hidden email]]
Sent: Monday, August 23, 2010 16:40
To: [hidden email]
Subject: Re: SolrException log

[quoted messages and stack traces snipped]

Re: SolrException log

Tommaso Teofili
Hi again Bastian,

2010/8/23 Bastian Spitzer <[hidden email]>

> [quoted default deletionPolicy block snipped]
> So do I have to increase maxCommitsToKeep to a value of 2 when I add a
> maxCommitAge parameter, or will 1 still be enough?


I would advise raising the number of commit points kept to a reasonable value, considering your indexing (and commit request) and search frequencies. Keeping too many commit points wastes disk space, but keeping "enough" should prevent your issue.
I would run some tests with small values of maxCommitsToKeep (no more than 10-20) and with maxCommitAge set to one of the proposed values (30MINUTES or 1DAY) and see what happens.


> Do I have to call optimize more than once a day when I set
> maxOptimizedCommitsToKeep to a value of 1?


> Can someone please explain how this is supposed to work?
>

This (SolrDeletionPolicy) is an implementation of the Lucene IndexDeletionPolicy interface, which is responsible for deciding which commit points of the index get deleted.
As you can see from the code, when a new commit happens the list of current commit points is retrieved, and only the ones that respect maxCommitAge are kept; the others are discarded.
If you have any IndexSearcher/Reader/Writer open on a just-discarded commit point (or a portion of one), you will eventually run into that issue. Since you are not running on an NFS-like file system, I am not sure this is the case here; still, my advice stands, and some testing with maxCommitAge and maxCommitsToKeep should clarify it.
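
To make that concrete, below is a minimal sketch of a custom deletion policy that keeps only the newest n commit points. The class name is made up for illustration, and the signatures are from the Lucene 2.9 IndexDeletionPolicy interface that Solr 1.4.1 ships with; the real SolrDeletionPolicy layers its count- and age-based rules on the same hooks:

    import java.util.List;

    import org.apache.lucene.index.IndexCommit;
    import org.apache.lucene.index.IndexDeletionPolicy;

    // Hypothetical example, not the actual SolrDeletionPolicy source:
    // retain only the newest n commit points and delete everything older.
    public class KeepLastNDeletionPolicy implements IndexDeletionPolicy {

        private final int n;

        public KeepLastNDeletionPolicy(int n) {
            this.n = n;
        }

        public void onInit(List<? extends IndexCommit> commits) {
            // Called once when the IndexWriter is opened; apply the same rule.
            onCommit(commits);
        }

        public void onCommit(List<? extends IndexCommit> commits) {
            // Lucene passes commit points sorted oldest-first;
            // flag all but the newest n for deletion.
            for (int i = 0; i < commits.size() - n; i++) {
                commits.get(i).delete();
            }
        }
    }

A searcher still reading from a commit point that has been deleted underneath it is exactly the situation described above that can end in "read past EOF".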
My 2 cents, have a nice day.
Tommaso



> [quoted messages and stack traces snipped]

Re: SolrException log

Bastian S.
Hi Tommaso, hi solr-users,

I raised both maxCommitAge and maxCommitsToKeep and tracked the occurrences of the "read past EOF" exception. I started with 2 commits and 60MINUTES, and I am now at 15 commits and 180MINUTES, but with no luck; the exception still pops up at nearly the same frequency as before (approximately every 2-3 days).

Any other ideas I should give a try?

-----Original Message-----
From: Tommaso Teofili [mailto:[hidden email]]
Sent: Wednesday, August 25, 2010 11:30
To: [hidden email]
Subject: Re: SolrException log

[quoted message snipped]