contrib/benchmark Quality


contrib/benchmark Quality

Grant Ingersoll
Has anyone thought about integrating the contrib/benchmark Quality  
stuff into the "algorithm" framework that's used for timings, etc.?  
For instance, I would like to write an algorithm file where my rounds  
consist of doing various runs with different similarities all on the  
same index.

It would probably need a new Task for setting the similarity (and the
ability to modify the index using the setNorms functionality). Anyone else
(Doron :-) ) have any thoughts on how to go about this?
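
As a minimal sketch of what such a task might look like, assuming the
existing PerfTask API in contrib/benchmark (the SetSimilarityTask name and
the "similarity.class" property are hypothetical, not part of the current
framework):

package org.apache.lucene.benchmark.byTask.tasks;

import org.apache.lucene.benchmark.byTask.PerfRunData;
import org.apache.lucene.search.Similarity;

/**
 * Hypothetical task that installs a Similarity for the current round.
 * The class name is read from the .alg properties, so each round can
 * swap in a different similarity over the same index.
 */
public class SetSimilarityTask extends PerfTask {

  public SetSimilarityTask(PerfRunData runData) {
    super(runData);
  }

  public int doLogic() throws Exception {
    // "similarity.class" is an illustrative property name.
    String clsName = getRunData().getConfig().get(
        "similarity.class", "org.apache.lucene.search.DefaultSimilarity");
    // Requires a no-args constructor (see Doron's point below).
    Similarity sim = (Similarity) Class.forName(clsName).newInstance();
    Similarity.setDefault(sim);
    return 1;
  }
}

Each round's .alg settings could then point similarity.class at a
different implementation while reusing the same index.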

-Grant


Re: contrib/benchmark Quality

Doron Cohen
Hi Grant, I initially thought of doing so, but after working on the Million
Queries Track, where running the 10,000 queries could take more than a day
(depending on the settings) and where indexing was done once and took a few
days, I felt that tighter control was needed than what the benchmark layer
provides. Maybe I should rephrase that: I thought the time it would take to
stabilize this functionality wasn't worth investing, because the run itself
can take so long and I wouldn't want to have to repeat it all just because
of a mistake in the benchmark settings. But then again, the same point may
be a reason to have a framework that protects you from errors... :-)
I'll take a second look at this!

For setting the similarities and whatever else, the SetProp task can be used
to set the class names, and then your similarity of choice can be loaded by
name. That restricts you to a no-args constructor, but that's not too bad...?
We definitely need a new QualityRunTask, but that should be quite
straightforward.
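
As a rough sketch of what that task might look like, driving the existing
Quality package classes from a PerfTask (the QualityRunTask shown here and
the quality.topics / quality.qrels property names are hypothetical, and the
exact QualityBenchmark.execute() signature should be checked against the
current code):

package org.apache.lucene.benchmark.byTask.tasks;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;

import org.apache.lucene.benchmark.byTask.PerfRunData;
import org.apache.lucene.benchmark.quality.Judge;
import org.apache.lucene.benchmark.quality.QualityBenchmark;
import org.apache.lucene.benchmark.quality.QualityQuery;
import org.apache.lucene.benchmark.quality.QualityQueryParser;
import org.apache.lucene.benchmark.quality.trec.TrecJudge;
import org.apache.lucene.benchmark.quality.trec.TrecTopicsReader;
import org.apache.lucene.benchmark.quality.utils.SimpleQQParser;
import org.apache.lucene.search.IndexSearcher;

/**
 * Hypothetical task that runs the Quality package over the round's index.
 * Topics and qrels file locations come from the .alg properties.
 */
public class QualityRunTask extends PerfTask {

  public QualityRunTask(PerfRunData runData) {
    super(runData);
  }

  public int doLogic() throws Exception {
    String topicsFile = getRunData().getConfig().get("quality.topics", null);
    String qrelsFile = getRunData().getConfig().get("quality.qrels", null);

    // Read TREC-style topics and relevance judgments.
    QualityQuery[] qqs = new TrecTopicsReader().readQueries(
        new BufferedReader(new FileReader(topicsFile)));
    Judge judge = new TrecJudge(new BufferedReader(new FileReader(qrelsFile)));

    // Parse the "title" part of each topic against the "body" field.
    QualityQueryParser qqParser = new SimpleQQParser("title", "body");

    IndexSearcher searcher = new IndexSearcher(getRunData().getDirectory());
    try {
      QualityBenchmark qb =
          new QualityBenchmark(qqs, qqParser, searcher, "docname");
      qb.execute(judge, null, new PrintWriter(System.out, true));
    } finally {
      searcher.close();
    }
    return qqs.length;
  }
}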

Cheers,
Doron


Re: contrib/benchmark Quality

Grant Ingersoll
Yes, I was thinking of smaller collections/query logs.  The Million  
Queries Track is certainly interesting.

It would also be good to be able to spit out reports as files. Sigh. I need
about 5 more hours in the day and the energy to access them.
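
Since QualityBenchmark already reports through a PrintWriter, a file-based
report could be a small extension of the sketch above (this fragment reuses
qb and judge as defined there; the file name is illustrative):

import java.io.FileWriter;
import java.io.PrintWriter;

import org.apache.lucene.benchmark.quality.QualityStats;

// Direct the quality report to a file instead of stdout.
PrintWriter reportOut =
    new PrintWriter(new FileWriter("quality-report.txt"), true);
QualityStats[] stats = qb.execute(judge, null, reportOut);
// Also log a summary averaged over all queries.
QualityStats.average(stats).log("SUMMARY", 2, reportOut, "  ");
reportOut.close();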

-Grant


--------------------------
Grant Ingersoll
http://lucene.grantingersoll.com
http://www.lucenebootcamp.com

Lucene Helpful Hints:
http://wiki.apache.org/lucene-java/BasicsOfPerformance
http://wiki.apache.org/lucene-java/LuceneFAQ