Commit performance problem


Commit performance problem

Anders Arpteg
I have a large Solr index that is currently about 6 GB and is suffering from
severe performance problems during updates. A commit can take over 10
minutes to complete. I have tried increasing the JVM's maximum heap to over
6 GB, but without any improvement. I have also tried turning off
waitSearcher and waitFlush, which does significantly improve the commit speed.
However, the maximum number of searchers is then quickly reached.
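(For context, waitFlush and waitSearcher are attributes on the commit message itself. A sketch of such a request body, assuming the default XML update handler; the exact endpoint is an assumption:)

```xml
<!-- return immediately instead of blocking until the flush finishes
     and a new searcher is registered -->
<commit waitFlush="false" waitSearcher="false"/>
```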

 

Would a switch to another container (currently using Jetty) make any
difference? Does anyone have any other tips for improving performance?

 

TIA,

Anders

 

 


Re: Commit performance problem

kkrugler
>I have a large Solr index that is currently about 6 GB and is suffering from
>severe performance problems during updates. A commit can take over 10
>minutes to complete. I have tried increasing the JVM's maximum heap to over
>6 GB, but without any improvement. I have also tried turning off
>waitSearcher and waitFlush, which does significantly improve the commit speed.
>However, the maximum number of searchers is then quickly reached.

If you have a large index, then I'd recommend having a separate Solr
installation that you use to update/commit changes, after which you
use snappuller or equivalent to swap it in to the live (search)
system.
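(The swap step can be sketched as follows. This is not the actual snappuller script, just a minimal illustration of the snapshot-plus-symlink idea it relies on; all paths here are invented for the example:)

```shell
#!/bin/sh
# Minimal illustration (not the real snappuller) of the snapshot-swap
# idea: commit on a separate master index, copy the result to a
# timestamped snapshot, then atomically repoint the symlink that the
# live search node reads from.  All paths are invented.
set -e

WORK=$(mktemp -d)
mkdir -p "$WORK/master/index" "$WORK/live"

# 1. The master finishes its slow commit; fake a segment file here.
echo "segment data" > "$WORK/master/index/_0.cfs"

# 2. Take a timestamped snapshot of the committed index.
SNAP="$WORK/live/snapshot.$(date +%Y%m%d%H%M%S)"
cp -r "$WORK/master/index" "$SNAP"

# 3. Atomic swap: the searcher always opens $WORK/live/index.
ln -sfn "$SNAP" "$WORK/live/index"
```

Because the searcher only ever follows the symlink, readers never see a half-copied index, and the slow commit happens entirely off the serving machine.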

>Would a switch to another container (currently using Jetty) make any
>difference?

Very unlikely.

>Does anyone have any other tip for improving the performance?

Switch to Lucene 2.3, and tune the new parameters that control memory
usage during updating.
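(With Lucene 2.3, the indexing buffer can be flushed by RAM usage rather than by buffered-document count. A sketch of the relevant solrconfig.xml settings; the element names follow the example configs of that era, and the values are illustrative assumptions, not recommendations:)

```xml
<indexDefaults>
  <!-- Flush in-memory changes once they exceed this many MB,
       instead of counting buffered docs; illustrative value. -->
  <ramBufferSizeMB>64</ramBufferSizeMB>
  <!-- A higher mergeFactor defers segment merges: faster updates,
       at the cost of more files and slower searches. -->
  <mergeFactor>10</mergeFactor>
</indexDefaults>
```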

-- Ken
--
Ken Krugler
Krugle, Inc.
+1 530-210-6378
"If you can't find it, you can't fix it"

RE: Commit performance problem

Jae Joo-2
I have had the same experience. I have a 6.5 GB index and update it daily.
Have you ever checked what happens when the updated file contains no
documents and you try a "commit"? I don't know why, but it still takes very
long, more than 10 minutes.

Jae Joo

-----Original Message-----
From: Ken Krugler [mailto:[hidden email]]
Sent: Tuesday, February 12, 2008 10:34 AM
To: [hidden email]
Subject: Re: Commit performance problem


RE: Commit performance problem

Jae Joo-2
Or, if you have multiple files to be updated, make sure you index all of the
files first and commit only once, at the end of indexing.
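(A sketch of that pattern, printed as a dry run rather than actually contacting a server; the URL and file names are assumptions:)

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would post every document
# file and then issue a SINGLE commit at the end, instead of one
# commit per file.  The URL and file names are assumptions.
SOLR_URL="http://localhost:8983/solr/update"

post_cmds() {
  for f in "$@"; do
    echo "curl $SOLR_URL --data-binary @$f -H 'Content-Type: text/xml'"
  done
  # One commit after all files are in -- not one per file.
  echo "curl $SOLR_URL --data-binary '<commit/>' -H 'Content-Type: text/xml'"
}

post_cmds doc1.xml doc2.xml doc3.xml
```

Each commit pays the full flush-and-reopen cost, so batching all the adds behind one commit amortizes that cost across the whole update.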

Jae

-----Original Message-----
From: Jae Joo [mailto:[hidden email]]
Sent: Tuesday, February 12, 2008 10:50 AM
To: [hidden email]
Subject: RE: Commit performance problem


RE: Commit performance problem

justin alexander
A script for posting large sets (23 GB here):

post3.sh