Jed Wesley-Smith commented on LUCENE-140:
Thanks for the patch, applied and recreated. Attached is the log.
To be explicit: we recreate the index via the IndexWriter constructor with the create flag set and then completely rebuild the index. We do not delete the entire directory first. There ARE old index files (_*.cfs & _*.del) in the directory with timestamps that are months old. If we completely recreate the directory, the problem does go away. That is a fairly trivial "fix", but we are still investigating, as we want to know whether this is indeed the problem, how we came to make it prevalent, and what the root cause is.
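The directory-recreation workaround described above amounts to removing any stale segment files (e.g. _*.cfs, _*.del) before rebuilding, instead of relying on the IndexWriter create flag alone. A minimal sketch using only java.io; the class name, directory path, and file names here are hypothetical, not from the actual report:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the workaround: wipe the index directory
// before rebuilding, so no stale _*.cfs / _*.del files survive.
public class WipeIndexDir {

    // Deletes every file in dir (non-recursive; a Lucene index directory is flat).
    static void wipeDirectory(File dir) {
        File[] files = dir.listFiles();
        if (files == null) return; // dir does not exist or is not a directory
        for (File f : files) {
            f.delete();
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "lucene-index-demo");
        dir.mkdirs();
        // Simulate stale index files left over from an earlier build.
        new File(dir, "_a.cfs").createNewFile();
        new File(dir, "_a.del").createNewFile();
        wipeDirectory(dir);
        System.out.println("files left: " + dir.listFiles().length);
    }
}
```

After the wipe, opening the IndexWriter with the create flag starts from a genuinely empty directory, which is what made the problem go away in our tests.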
Thanks for all the help everyone.
> docs out of order
> Key: LUCENE-140
> URL: https://issues.apache.org/jira/browse/LUCENE-140
> Project: Lucene - Java
> Issue Type: Bug
> Components: Index
> Affects Versions: unspecified
> Environment: Operating System: Linux
> Platform: PC
> Reporter: legez
> Assigned To: Michael McCandless
> Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch
> I cannot figure out why (or what) is happening all the time. I get a
> java.lang.IllegalStateException: docs out of order
> at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
> at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
> at org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
> at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
> at Optimize.main(Optimize.java:29)
> It happens in both 1.2 and 1.3rc1 (by the way, what happened to 1.3rc1? I cannot find
> it in either the downloads or the version list in this form). Everything seems OK: I
> can search through the index, but I cannot optimize it. Even worse, after this
> exception, every time I add new documents and close the IndexWriter a new segment is
> created! I think it contains all the documents added before, judging by its size.
> My index is quite big: 500,000 docs, about 5 GB of index directory.
> It is _repeatable_: I drop the index and reindex everything. Afterwards I add a few
> docs, try to optimize, and receive the above exception.
> My documents' structure is:
> static Document indexIt(String id_strony, Reader reader, String data_wydania,
>         String id_wydania, String id_gazety, String data_wstawienia) {
>     Document doc = new Document();
>     doc.add(Field.Keyword("id", id_strony));
>     doc.add(Field.Keyword("data_wydania", data_wydania));
>     doc.add(Field.Keyword("id_wydania", id_wydania));
>     doc.add(Field.Text("id_gazety", id_gazety));
>     doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
>     doc.add(Field.Text("tresc", reader));
>     return doc;
> }