ACLs and Lucene

ACLs and Lucene

Markus Wiederkehr
I am working on a Document Management System where every document has
an Access Control List attached to it. Obviously, a search result
should contain only documents that the currently logged-in user may
view.

I can think of three strategies to accomplish this goal:

1) using Filter and FilteredQuery
2) filtering the search result
3) somehow storing the ACL elements as Lucene fields

But each approach has serious drawbacks.

The first one degrades rapidly as the number of documents increases.
Think of determining the viewability of 10,000 documents where you
need several SQL queries per document.
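
For concreteness, approach 1 would look something like the following
(untested sketch; PermissionService is a placeholder for the ACL
lookup, and the per-document call inside the loop is exactly where it
degrades):

import java.io.IOException;
import java.util.BitSet;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Filter;

public class AclFilter extends Filter {

    // placeholder for whatever answers "may this user view this document?"
    public interface PermissionService {
        boolean mayView(String user, String docId);
    }

    private final String user;
    private final PermissionService permissions;

    public AclFilter(String user, PermissionService permissions) {
        this.user = user;
        this.permissions = permissions;
    }

    public BitSet bits(IndexReader reader) throws IOException {
        BitSet bits = new BitSet(reader.maxDoc());
        for (int i = 0; i < reader.maxDoc(); i++) {
            if (reader.isDeleted(i))
                continue;
            String docId = reader.document(i).get("id"); // stored external id
            if (permissions.mayView(user, docId))        // SQL per document...
                bits.set(i);
        }
        return bits;
    }
}

You would then call searcher.search(query, new AclFilter(user,
permissions)); with 10,000 documents the inner call means 10,000 round
trips unless the service caches aggressively.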

The second approach also degrades badly when a user has access to a
very small subset of all documents. There could be thousands of false
hits before the first viewable document is reached.

The third approach looks most promising to me but would require
updating Lucene documents whenever an ACL changes. Unfortunately, it is
not possible to update a Lucene document without losing fields that are
indexed but not stored, right?

So my question is: is there another approach or a "standard solution"
I did not think of? Or how did others solve this problem?

Thanks in advance,

Markus


RE: ACLs and Lucene

Max Pfingsthorn
Hi!

I've got exactly the same problem. Maybe it is possible to extend the previously discussed patch to fragment the fields of one document into separate files, so that a single fragment can be updated on its own? Updating frequently changing fields (like ACLs or other metadata, maybe even a PageRank value for Nutch?) would then be cheaper. It would also make it easy to 'render' ACLs onto the documents they influence whenever the ACLs change. After all, you don't change ACLs as often as you access documents. I guess this would be hard, as the lexicon is stored elsewhere... Any ideas?
It would of course be even better to properly separate these into different indices and be able to map document IDs across them. Updating would be rather simple, and retrieval could be done in parallel. Maybe a custom RelationalMultiSearcher would be in order?

I've also thought about combining document- and field-based fragmentation strategies. Since we need subsecond search and update performance on a multi-million-document index in the near future, this seems like the way to go. Hardware would not really be an issue here, but of course we want to be efficient, especially in a multi-processor environment. Have there been any thoughts about this?

Best regards,

Max Pfingsthorn

Hippo  

Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-------------------------------------------------------------
[hidden email] / www.hippo.nl
--------------------------------------------------------------



RE: ACLs and Lucene

Bruce Ritchie
In reply to this post by Markus Wiederkehr
Markus,

We took a combination of the first and the second approach in our applications. We filter by the content area that the user is allowed to view and then filter the search results that are retrieved. It's actually very fast for us because we don't have to load the document to check the permissions - we just query an API which caches all the permissions. SQL is only required for loading the documents that are visible on any given result page (assuming the document isn't already loaded into cache).
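
Roughly, in code (untested sketch; PermissionCache stands in for our
caching API, and the id comes from a stored field):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Searcher;

public class AclPostFilter {

    // stand-in for our permission-caching API
    public interface PermissionCache {
        boolean mayView(String user, String docId);
    }

    public static List firstVisiblePage(Searcher searcher, Query query,
            Filter areaFilter, String user, PermissionCache cache,
            int pageSize) throws IOException {
        Hits hits = searcher.search(query, areaFilter); // coarse filter (approach 1)
        List page = new ArrayList();
        for (int i = 0; i < hits.length() && page.size() < pageSize; i++) {
            Document doc = hits.doc(i);
            // in-memory permission check per hit - no SQL here (approach 2)
            if (cache.mayView(user, doc.get("id")))
                page.add(doc);
        }
        return page;
    }
}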

The third approach was deemed unusable for the exact reason you outlined.


Regards,

Bruce Ritchie


RE: ACLs and Lucene

Robichaud, Jean-Philippe
In reply to this post by Markus Wiederkehr
What about:
http://svn.apache.org/viewcvs.cgi/lucene/java/trunk/src/java/org/apache/lucene/index/ParallelReader.java?rev=169859&view=markup

Jp

Re: ACLs and Lucene

Markus Wiederkehr
On 5/30/05, Robichaud, Jean-Philippe
<[hidden email]> wrote:
> What about:
> http://svn.apache.org/viewcvs.cgi/lucene/java/trunk/src/java/org/apache/lucene/index/ParallelReader.java?rev=169859&view=markup

Thank you, this seems to be exactly what I am looking for.

One thing I don't quite understand is how to ensure that the document
numbers in the different indexes always correspond. AFAIK the only way
to update a Lucene document is to delete and re-add it, right? And
doing so is likely to change the document number. So when I update a
document in one index but not in the other(s), the link between them
gets lost. How do I prevent this?

Markus


Augmenting an existing index (was: ACLs and Lucene)

Sebastian Marius Kirsch
In reply to this post by Robichaud, Jean-Philippe
Hello,

I have a similar problem, for which ParallelReader looks like a good
solution -- except for the problem of creating a set of indices with
matching document numbers.

I want to augment the documents in an existing index with information
that can be extracted from the same index. (Basically, I am indexing a
mailing list archive and want to add keyword fields to documents that
contain the message ids of followup messages. That way, I could
quickly link from an original message to its followup messages.
Unfortunately, I don't know the ids of all followup messages until
after I have indexed the whole archive.)

I tried to implement a FilterIndexReader that would add the required
information, but couldn't get it to work. (I guess there's more to
extending FilterIndexReader than just overriding the document() method
and tacking a few more keyword fields onto the document before
returning it.) When I add my FilterIndexReader to a new IndexWriter
with the .addIndexes() method, it seems to work, but when I try to
optimize the new index, I get the following error:

merging segments _0 (1900 docs)Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 100203040
        at java.util.ArrayList.get(ArrayList.java:326)
        at org.apache.lucene.index.FieldInfos.fieldInfo(FieldInfos.java:155)
        at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:66)
        at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:237)
        at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:185)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:92)
        at org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:487)
        at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:366)
        at org.sebastiankirsch.thesis.util.MailFilterIndexReader.main(MailFilterIndexReader.java:210)

If I don't optimize the index, I don't get an error, but Luke cannot
read the new index properly. I guess this has something to do with me
messing with the documents without properly adjusting the index terms
etc.
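
For reference, the relevant part of my reader looks roughly like this
(lookupFollowups is my own helper, stubbed out here):

import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.FilterIndexReader;
import org.apache.lucene.index.IndexReader;

public class MailFilterIndexReader extends FilterIndexReader {

    public MailFilterIndexReader(IndexReader in) {
        super(in);
    }

    public Document document(int n) throws IOException {
        Document doc = in.document(n);
        // tack the ids of all followup messages onto the document
        String[] followups = lookupFollowups(doc.get("message-id"));
        for (int i = 0; i < followups.length; i++)
            doc.add(Field.Keyword("followup", followups[i]));
        return doc;
    }

    private String[] lookupFollowups(String messageId) {
        return new String[0]; // stub; the real version consults the first pass
    }
}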

At the moment, I index the whole archive twice and use the info from
the first index to add the missing fields to the second index. However,
it would save me a lot of work (and processing power, of course) if I
could just postprocess the index from the first pass without
re-indexing the messages. Furthermore, it would open up the possibility
of applying even more postprocessing passes. (I'm probably going to
need that soon.)

I presume that a ParallelReader could be merged into a single index
using addIndexes()? So if the problem of keeping the doc numbers in
sync can be solved ...

Alternatively, I would welcome hints as to how to implement a
FilterIndexReader properly.

Thanks very much for your time, Sebastian

On Mon, May 30, 2005 at 11:32:13AM -0400, Robichaud, Jean-Philippe wrote:
> What about:
> http://svn.apache.org/viewcvs.cgi/lucene/java/trunk/src/java/org/apache/lucene/index/ParallelReader.java?rev=169859&view=markup

--
Sebastian Kirsch <[hidden email]> [http://www.sebastian-kirsch.org/]


managing docids for ParallelReader (was Augmenting an existing index)

Matt Quail
> I have a similar problem, for which ParallelReader looks like a good
> solution -- except for the problem of creating a set of indices with
> matching document numbers.

I have wondered about this as well. Are there any *sure-fire* ways of
creating (and updating) two indices so that doc numbers in one index
deliberately correspond to doc numbers in the other index?

=Matt


Re: managing docids for ParallelReader (was Augmenting an existing index)

Doug Cutting
Matt Quail wrote:
>> I have a similar problem, for which ParallelReader looks like a good
>> solution -- except for the problem of creating a set of indices with
>> matching document numbers.
>
>
> I have wondered about this as well. Are there any *sure fire* ways of  
> creating (and updating) two indices so that doc numbers in one index  
> deliberately correspond to doc numbers in the other index?

If you add the documents in the same order to both indexes and perform
the same deletions on both indexes then they'll have the same numbers.
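
In code, the invariant is just that every operation is mirrored
(untested sketch):

import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class LockstepUpdater {

    // additions go to both indexes in the same order
    public static void add(IndexWriter big, IndexWriter small,
            Document bigDoc, Document smallDoc) throws IOException {
        big.addDocument(bigDoc);
        small.addDocument(smallDoc);
    }

    // deletions are keyed by the same term in both indexes
    // (writers must be closed before deleting through readers)
    public static void delete(IndexReader big, IndexReader small,
            String id) throws IOException {
        big.delete(new Term("id", id));
        small.delete(new Term("id", id));
    }
}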

If this is not convenient, then you could add an id field to all
documents in the primary index.  Then create (or re-create) the
secondary index by iterating through the values in a FieldCache of this
id field.
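
Something like this (untested; assumes the primary index has no
deletions, and lookupAcl is whatever fetches the ACL terms for an
external id):

import java.io.IOException;

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.FieldCache;

public class RebuildSecondary {

    public static void main(String[] args) throws IOException {
        IndexReader primary = IndexReader.open(args[0]);
        // ids[n] is the id field value of document number n
        String[] ids = FieldCache.DEFAULT.getStrings(primary, "id");
        IndexWriter writer = new IndexWriter(args[1], new SimpleAnalyzer(), true);
        for (int n = 0; n < ids.length; n++) {   // doc-number order
            Document doc = new Document();
            doc.add(Field.Keyword("id", ids[n]));
            doc.add(Field.Keyword("acl", lookupAcl(ids[n])));
            writer.addDocument(doc);
        }
        writer.optimize();
        writer.close();
        primary.close();
    }

    private static String lookupAcl(String id) {
        return ""; // stub; fetch this document's ACL from your database
    }
}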

ParallelReader was not really designed to support incremental updates of
fields, but rather to accelerate batch updates.  For incremental
updates you're probably better served by updating a single index.

One could define an "acl" IndexReader subclass that generates TermDocs
lists on the fly by looking in an external database.  This would require
a mapping between Lucene document ids and external document IDs.  A
FieldCache, as described above, could serve that purpose.
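
A rough sketch of such a reader (untested; AclStore is hypothetical and
must return doc numbers in ascending order, and scoring details such as
docFreq and norms for the synthetic field are glossed over):

import java.io.IOException;

import org.apache.lucene.index.FilterIndexReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;

public class AclIndexReader extends FilterIndexReader {

    // hypothetical external store; returns ascending Lucene doc numbers
    public interface AclStore {
        int[] viewableDocs(String principal) throws IOException;
    }

    private final AclStore store;

    public AclIndexReader(IndexReader in, AclStore store) {
        super(in);
        this.store = store;
    }

    public TermDocs termDocs(Term term) throws IOException {
        if (!"acl".equals(term.field()))
            return super.termDocs(term); // ordinary fields: use the index
        final int[] docs = store.viewableDocs(term.text());
        return new TermDocs() {          // posting list generated on the fly
            private int i = -1;
            public void seek(Term t) {}
            public void seek(TermEnum termEnum) {}
            public int doc() { return docs[i]; }
            public int freq() { return 1; }
            public boolean next() { return ++i < docs.length; }
            public int read(int[] ds, int[] fs) throws IOException {
                int n = 0;
                while (n < ds.length && next()) { ds[n] = doc(); fs[n] = 1; n++; }
                return n;
            }
            public boolean skipTo(int target) {
                while (next()) if (doc() >= target) return true;
                return false;
            }
            public void close() {}
        };
    }
}

A required TermQuery on acl:<user> combined with the user's query would
then restrict hits to viewable documents.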

Doug


Re: managing docids for ParallelReader (was Augmenting an existing index)

Markus Wiederkehr
On 5/31/05, Doug Cutting <[hidden email]> wrote:
> Matt Quail wrote:
> > I have wondered about this as well. Are there any *sure fire* ways of
> > creating (and updating) two indices so that doc numbers in one index
> > deliberately correspond to doc numbers in the other index?
>
> If you add the documents in the same order to both indexes and perform
> the same deletions on both indexes then they'll have the same numbers.

The Javadoc says that ParallelReader is useful with collections that
have large fields which change rarely and small fields that change
more frequently. IMO that implies that you do *not* always apply the
same operations on both indexes.

> If this is not convenient, then you could add an id field to all
> documents in the primary index.  Then create (or re-create) the
> secondary index by iterating through the values in a FieldCache of this
> id field.

I guess I am too new to Lucene to understand how that is supposed to
work. What exactly is the purpose of a FieldCache, and how is it
created and used? Could you elaborate on that, please?

> ParallelReader was not really designed to support incremental updates of
> fields, but rather to accellerate batch updates.  For incremental
> updates you're probably better served by updating a single index.

I would be happy with a single index if it were possible to change
fields of a document without affecting other fields. When I look up a
document using an IndexSearcher, manipulate some fields and save that
instance using an IndexWriter, I lose all fields that were indexed but
not stored. Recreating those fields whenever the ACL of a document
changes is too expensive, so that is not an option.

> One could define an "acl" IndexReader subclass that generates termDoc
> lists on the fly by looking in an external database.  This would require
> a mapping between Lucene document ids and external document IDs.  A
> FieldCache, as described above, could serve that purpose.

Again, could you elaborate a little more on the FieldCache, please?

Thanks,

Markus


Re: managing docids for ParallelReader (was Augmenting an existing index)

Markus Wiederkehr
In reply to this post by Doug Cutting
On 5/31/05, Doug Cutting <[hidden email]> wrote:
> > I have wondered about this as well. Are there any *sure fire* ways of
> > creating (and updating) two indices so that doc numbers in one index
> > deliberately correspond to doc numbers in the other index?
>
> If you add the documents in the same order to both indexes and perform
> the same deletions on both indexes then they'll have the same numbers.

Would it be possible to write an IndexReader that combines two indexes
by a common field, for example a document ID? And how performant would
such an implementation be?

Markus


Re: managing docids for ParallelReader

Sebastian Marius Kirsch
In reply to this post by Doug Cutting
Hi Doug,

I took up your suggestion to use a ParallelReader for adding more
fields to existing documents. I now have two indexes with the same
number of documents, but different fields. One field is duplicated
(the id field.)

I wrote a small class to merge those two indexes into one index; it is
attached to this message. However, when I run this class in order to
merge the two indexes, I get a NullPointerException:

Exception in thread "main" java.lang.NullPointerException
        at org.apache.lucene.index.ParallelReader$ParallelTermPositions.seek(ParallelReader.java:318)
        at org.apache.lucene.index.ParallelReader$ParallelTermDocs.seek(ParallelReader.java:294)
        at org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:325)
        at org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:296)
        at org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:270)
        at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:234)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:96)
        at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:596)
        at org.sebastiankirsch.thesis.util.ParallelIndexMergeTool.main(ParallelIndexMergeTool.java:27)

I'm afraid that this is my first journey into the bowels of Lucene,
and I don't know where to look for the source of the problem. I tried
removing the duplicate field, but the symptoms stay the same. Does
this mean that I cannot merge two indexes from a ParallelReader into
one normal index? Or is it a problem with my index? Or a problem
somewhere else?

I am using revision 179785 from the svn repo.

Thanks very much for your time, Sebastian


import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.ParallelReader;

public class ParallelIndexMergeTool {

    // args[0] is the target index; args[1..n] are the parallel source indexes
    public static void main(String[] args) throws IOException {
        IndexWriter writer = new IndexWriter(args[0], new StandardAnalyzer(), true);
        ParallelReader reader = new ParallelReader();

        for (int i = 1; i < args.length; i++) {
            reader.add(IndexReader.open(args[i]));
        }

        writer.addIndexes(new IndexReader[] { reader });
        writer.optimize();
        writer.close();
    }
}

--
Sebastian Kirsch <[hidden email]> [http://www.sebastian-kirsch.org/]

NOTE: New email address! Please update your address book.


Re: managing docids for ParallelReader

Doug Cutting
Sebastian Marius Kirsch wrote:
> I took up your suggestion to use a ParallelReader for adding more
> fields to existing documents. I now have two indexes with the same
> number of documents, but different fields.

Does search work using the ParallelReader?

> One field is duplicated
> (the id field.)

Why is this duplicated?  Just curious.  That shouldn't cause a problem.

> I wrote a small class to merge those two indexes into one index; it is
> attached to this message. However, when I run this class in order to
> merge the two indexes, I get a NullPointerException:

Why are you merging?  Why not just search using the ParallelReader?
Again, just curious.  This should work.

> Exception in thread "main" java.lang.NullPointerException
> at org.apache.lucene.index.ParallelReader$ParallelTermPositions.seek(ParallelReader.java:318)
> at org.apache.lucene.index.ParallelReader$ParallelTermDocs.seek(ParallelReader.java:294)
> at org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:325)
> at org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:296)
> at org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:270)
> at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:234)
> at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:96)
> at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:596)
> at org.sebastiankirsch.thesis.util.ParallelIndexMergeTool.main(ParallelIndexMergeTool.java:27)

This could be a bug.  I have not tested merging with a ParallelReader.
Can you please try adding a test case to TestParallelReader that
demonstrates this?

Thanks,

Doug


Re: managing docids for ParallelReader

Sebastian Marius Kirsch
Dear Doug,

thanks for your message.

On Fri, Jun 03, 2005 at 09:37:01AM -0700, Doug Cutting wrote:
> Sebastian Marius Kirsch wrote:
> >I took up your suggestion to use a ParallelReader for adding more
> >fields to existing documents. I now have two indexes with the same
> >number of documents, but different fields.
> Does search work using the ParalleReader?

I have to admit that I didn't test *that* yet, sorry. I thought that
merging the index into another one would be a good test anyway.

> >One field is duplicated (the id field.)
> Why is this duplicated?  Just curious.  That shouldn't cause a problem.

Just as a precaution, so that I can tell afterwards whether two
indexes are in sync or not. (Iterate over the documents in both
indexes and check whether the id fields match.)

> Why are you merging?  Why not just search using the ParallelReader?
> Again, just curious.  This should work.

Interoperability. That way, I can later hand the index over to another
application that knows nothing about parallel indexes, and I don't have
to make sure that this other application combines the indexes the
right way.

(Oh, and I can use Luke to check the index for plausibility. That's an
important point for me.)

> This could be a bug.  I have not tested merging with a ParallelReader.
> Can you please try to adding a test case to TestParallelReader that
> demonstrates this?

I have attached the diff for the test case, and the output of the test
run.

I have played around with the code for a couple of hours, but cannot
find a fix for this. If I change ParallelTermPositions.seek(TermEnum
termEnum) to check for termEnum.term() being null, and then hand this
null over to the correct IndexReader instead of calling
.seek(termEnum.term()), then I get a different error (a
NullPointerException in
org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:140)).
Apparently, TermPositions are not made for seeking to a null.

On the other hand, I don't know where the null is coming from in the
first case. It comes from the termEnum of one of the underlying
IndexReaders, and if that's a problem, it should be a problem outside
of ParallelReader too.

I'm confused (in case you couldn't tell that yet.) I'll try to find
out more tomorrow.

Regards, Sebastian

PS: I'm rather new to this whole Java thing; I tried to import lucene
into Eclipse for easier debugging, but failed. If any of the
developers use Eclipse, I'd be grateful for some hints regarding
this. Thanks for bearing with me.

$ svn diff
Index: src/test/org/apache/lucene/index/TestParallelReader.java
===================================================================
--- src/test/org/apache/lucene/index/TestParallelReader.java    (revision 179785)
+++ src/test/org/apache/lucene/index/TestParallelReader.java    (working copy)
@@ -57,6 +57,13 @@
 
   }
 
+  public void testMerge() throws Exception {
+    Directory dir = new RAMDirectory();
+    IndexWriter w = new IndexWriter(dir, new StandardAnalyzer(), true);
+    w.addIndexes(new IndexReader[] { ((IndexSearcher) parallel).getIndexReader() });
+    w.close();
+  }
+  
   private void queryTest(Query query) throws IOException {
     Hits parallelHits = parallel.search(query);
     Hits singleHits = single.search(query);
$ ant -Dtestcase=TestParallelReader test
Buildfile: build.xml
[...]
test:
    [mkdir] Created dir: /Users/skirsch/text/lectures/da/thirdparty/lucene-trunk/build/test
    [junit] Testsuite: org.apache.lucene.index.TestParallelReader
    [junit] Tests run: 2, Failures: 0, Errors: 1, Time elapsed: 1.993 sec

    [junit] Testcase: testMerge(org.apache.lucene.index.TestParallelReader):   Caused an ERROR
    [junit] null
    [junit] java.lang.NullPointerException
    [junit]     at org.apache.lucene.index.ParallelReader$ParallelTermPositions.seek(ParallelReader.java:318)
    [junit]     at org.apache.lucene.index.ParallelReader$ParallelTermDocs.seek(ParallelReader.java:294)
    [junit]     at org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:325)
    [junit]     at org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:296)
    [junit]     at org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:270)
    [junit]     at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:234)
    [junit]     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:96)
    [junit]     at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:596)
    [junit]     at org.apache.lucene.index.TestParallelReader.testMerge(TestParallelReader.java:63)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)


    [junit] Test org.apache.lucene.index.TestParallelReader FAILED

BUILD FAILED
/Users/skirsch/text/lectures/da/thirdparty/lucene-trunk/common-build.xml:188: Tests failed!

Total time: 16 seconds
$

--
Sebastian Kirsch <[hidden email]> [http://www.sebastian-kirsch.org/]

NOTE: New email address! Please update your address book.
