Michael McCandless commented on LUCENE-140:
I've committed a fix for this one case to the trunk.
I'm leaving the issue open so folks above can try the fix and confirm
whether or not this fixes their cases.
Jed (or any other folks who have hit this above and are still
listening!), the fix is really trivial and would be easy to back-port
to prior releases: just run "svn diff -r 494135:494136" from a Lucene
checkout to see the changes.
If you are willing/able to try this in one of the environments where
you keep hitting this issue, that would be awesome. If this is in fact
your root cause, you would see an ArrayIndexOutOfBoundsException at
the point where the delete of a too-large docNum occurs (rather than
silent corruption and the above exception much later, as you see now);
and if it's not your root cause, then after testing the fix we would
know for sure to look for another cause here.
Are you sure that you only ever do IndexReader.deleteDocuments(Term)
and not deleteDocuments(int docNum)? I still can't explain how this
error could happen without using that second method.
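To make the distinction concrete, here is a minimal sketch (not the actual Lucene source; the class and field names are illustrative) of the kind of guard the fix adds: a delete of a doc number at or beyond maxDoc now fails fast with an ArrayIndexOutOfBoundsException at the call site, instead of silently setting a bit past the end of the deleted-docs set and corrupting a later merge.

```java
import java.util.BitSet;

// Illustrative sketch of bounds-checked document deletion.
// Names (DeletedDocs, deleteDocument, isDeleted) are assumptions
// for this example, not Lucene's real internals.
class DeletedDocs {
    private final int maxDoc;     // number of docs in the segment
    private final BitSet deleted; // one bit per deleted doc

    DeletedDocs(int maxDoc) {
        this.maxDoc = maxDoc;
        this.deleted = new BitSet(maxDoc);
    }

    void deleteDocument(int docNum) {
        if (docNum < 0 || docNum >= maxDoc) {
            // Fail fast here, rather than corrupting the index and
            // surfacing "docs out of order" much later during a merge.
            throw new ArrayIndexOutOfBoundsException(docNum);
        }
        deleted.set(docNum);
    }

    boolean isDeleted(int docNum) {
        return deleted.get(docNum);
    }
}
```

With this guard in place, a caller passing a stale or out-of-range docNum gets the exception immediately, which is exactly the behavior you would use to confirm whether this is your root cause.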
> docs out of order
> Key: LUCENE-140
> URL: https://issues.apache.org/jira/browse/LUCENE-140
> Project: Lucene - Java
> Issue Type: Bug
> Components: Index
> Affects Versions: unspecified
> Environment: Operating System: Linux
> Platform: PC
> Reporter: legez
> Assigned To: Lucene Developers
> Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar
> I cannot figure out why (or what) is happening all the time. I get an
> java.lang.IllegalStateException: docs out of order
> at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
> at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
> at org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
> at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
> at Optimize.main(Optimize.java:29)
> It happens in both 1.2 and 1.3rc1 (by the way, what happened to that release? I
> cannot find it in this form either in the downloads or in the version list).
> Everything else seems OK: I can search through the index, but I cannot optimize
> it. Even worse, after this exception, every time I add new documents and close
> the IndexWriter, a new segment is created! I think it contains all the documents
> added before, judging by its size.
> My index is quite big: 500,000 docs, about 5 GB in the index directory.
> It is _repeatable_: I drop the index and reindex everything. Afterwards I add a
> few docs, try to optimize, and receive the above exception.
> My documents' structure is:
> static Document indexIt(String id_strony, Reader reader, String data_wydania,
>                         String id_wydania, String id_gazety, String data_wstawienia) {
>     Document doc = new Document();
>     doc.add(Field.Keyword("id", id_strony));
>     doc.add(Field.Keyword("data_wydania", data_wydania));
>     doc.add(Field.Keyword("id_wydania", id_wydania));
>     doc.add(Field.Text("id_gazety", id_gazety));
>     doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
>     doc.add(Field.Text("tresc", reader));
>     return doc;
> }