[jira] [Updated] (LUCENE-8010) fix or sandbox similarities in core with problems

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/LUCENE-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrien Grand updated LUCENE-8010:
---------------------------------
    Attachment: LUCENE-8010.patch

I could get all similarities to pass current tests with some tweaks:
 - Axiomatic similarities add 1 to the freq; is that ok? Otherwise we would need to take freq = max(freq, 1), but that means sloppy phrase queries would produce the same score on all documents whose sloppy freq is less than 1.
 - AxiomaticF3* similarities have their score truncated to 0 when the gamma component would cause it to be negative. This means they could produce low-quality scores, but I have no idea how to fix it otherwise.
 - Lambda impls use nextUp/nextDown to make sure they never produce lambda=1, which does not work with DistributionSPL.
 - DistributionSPL also uses nextUp/nextDown to avoid producing infinite/NaN scores while still guaranteeing that scores do not decrease when tfn increases.
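
To make the tweaks above concrete, here is a minimal sketch (not the actual patch; class and method names are hypothetical) of the two ways to handle sloppy frequencies below 1, and of using Math.nextDown to keep a computed lambda strictly below 1:

```java
// Hypothetical sketch of the tweaks discussed above; not Lucene code.
public class FreqClamping {
    // Option A: shift the frequency, as the Axiomatic tweak does (freq + 1).
    static double shifted(double freq) {
        return freq + 1;
    }

    // Option B: clamp to 1. All sloppy freqs below 1 then score identically,
    // which is the downside mentioned above.
    static double clamped(double freq) {
        return Math.max(freq, 1);
    }

    // nextDown returns the largest double strictly less than its argument,
    // so a lambda that would compute to exactly 1 is nudged just below it.
    static double safeLambda(double lambda) {
        return lambda >= 1 ? Math.nextDown(1.0) : lambda;
    }

    public static void main(String[] args) {
        System.out.println(shifted(0.25));        // 1.25
        System.out.println(clamped(0.25));        // 1.0
        System.out.println(safeLambda(1.0) < 1);  // true
    }
}
```
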

> fix or sandbox similarities in core with problems
> -------------------------------------------------
>
>                 Key: LUCENE-8010
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8010
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Robert Muir
>         Attachments: LUCENE-8010.patch
>
>
> We want to support scoring optimizations such as LUCENE-4100 and LUCENE-7993, which put very minimal requirements on the similarity impl. Today similarities of various quality are in core and tests.
> The ones with problems currently carry javadoc warnings about their bugs, and when the problems are severe enough they are also disabled in randomized testing.
> IMO lucene core should only have practical functions that won't return {{NaN}} scores at times or cause relevance to go backwards if the user's stopfilter isn't configured perfectly. Also it is important for unit tests to not deal with broken or semi-broken sims, and the ones in core should pass all unit tests.
> I propose we move the buggy ones to sandbox and deprecate them. If they can be fixed we can put them back in core, otherwise bye-bye.
> FWIW tests developed in LUCENE-7997 document the following requirements:
>    * scores are non-negative and finite.
>    * score matches the explanation exactly.
>    * internal explanation calculations are sane (e.g. "sum of:" entries actually compute sums)
>    * scores don't decrease as term frequencies increase: e.g. score(freq=N + 1) >= score(freq=N)
>    * scores don't decrease as documents get shorter, e.g. score(len=M) >= score(len=M+1)
>    * scores don't decrease as terms get rarer, e.g. score(term=N) >= score(term=N+1)
>    * scoring works for floating point frequencies (e.g. sloppy phrase and span queries will work)
>    * scoring works for reasonably large 64-bit statistic values (e.g. distributed search will work)
>    * scoring works for reasonably large boost values (0 .. Integer.MAX_VALUE, e.g. query boosts will work)
>    * scoring works for parameters randomized within valid ranges
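
The monotonicity and finiteness requirements above can be checked mechanically against any scoring function. Here is a minimal, hypothetical sketch (not the LUCENE-7997 test code) of such a property check over a sweep of frequencies:

```java
import java.util.function.DoubleUnaryOperator;

public class SimilarityProperties {
    // Check that score(freq) is finite, non-negative, and non-decreasing
    // as freq grows, mirroring the requirements listed above.
    static boolean nonDecreasingFiniteNonNegative(DoubleUnaryOperator score) {
        double prev = Double.NEGATIVE_INFINITY;
        for (double freq = 0.5; freq <= 1024; freq *= 2) {
            double s = score.applyAsDouble(freq);
            if (!(Double.isFinite(s) && s >= 0 && s >= prev)) return false;
            prev = s;
        }
        return true;
    }

    public static void main(String[] args) {
        // A toy BM25-like saturation curve satisfies the properties.
        DoubleUnaryOperator toy = f -> f / (f + 1.2);
        System.out.println(nonDecreasingFiniteNonNegative(toy));      // true
        // A bare log(freq) fails: it goes negative for freq < 1, which
        // is exactly the sloppy-phrase case discussed above.
        System.out.println(nonDecreasingFiniteNonNegative(Math::log)); // false
    }
}
```
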



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
