[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511577#comment-16511577 ]

Erick Erickson commented on LUCENE-7976:
----------------------------------------

bq. maybe change to this?:
Sure. Next patch.

bq. Anyway, the maxMergeIsRunning logic prevents picking a "max sized" merge if the total bytes being merged across all running merges is >= the max merged size, which I think is good.

Right, good to know we're talking about the same thing ;)

bq. But I don't see where TMP is doing this same thing? We do compute mergingBytes, and pass it to score but otherwise seem not to use it?

This is where I get lost. I looked at {{mergingBytes}} clear back to the first revision of this file, and it has never been used in {{score(...)}}, just passed as a parameter. I think Simon took the parameter out as part of LUCENE-8330.

All it's used for is to set {{maxMergeIsRunning}}, which prevents a computed candidate from being used if a max merge is already running, just as you say. So the latest patch passes that boolean along to {{doFindMerges}} and does the same thing with it. Since the old code only seemed to care about this when doing a "natural" merge, the new code passes false in the other two cases.
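
For anyone following along, here's a minimal self-contained sketch of that guard as I understand it; the class and method names are illustrative only, not the actual {{TieredMergePolicy}} internals:

{code:java}
import java.util.Collection;

// Illustrative sketch only; names approximate the discussion above,
// not the real TieredMergePolicy fields and methods.
class MaxMergeGuardSketch {
  // current 5G default for the max merged segment size
  long maxMergedSegmentBytes = 5L * 1024 * 1024 * 1024;

  // Total bytes across all merges currently running.
  long mergingBytes(Collection<Long> runningMergeSizes) {
    long total = 0;
    for (long bytes : runningMergeSizes) {
      total += bytes;
    }
    return total;
  }

  // A "max sized" merge counts as running once the bytes being merged
  // across all running merges reach the max merged segment size; in
  // that case no new max-sized candidate should be picked.
  boolean maxMergeIsRunning(Collection<Long> runningMergeSizes) {
    return mergingBytes(runningMergeSizes) >= maxMergedSegmentBytes;
  }
}
{code}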

How about this: if that makes no sense and we only _think_ we're talking about the same thing, maybe we hop on a Google Hangout or something at your convenience and see if we can reconcile it all?



> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments
> -------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7976
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7976
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Erick Erickson
>            Assignee: Erick Erickson
>            Priority: Major
>         Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where very large indexes (on disk) are handled quite easily by a single Lucene index. This is particularly true as features like docValues move data into MMapDirectory space. The current TMP algorithm allows on the order of 50% deleted documents, as per a dev list conversation with Mike McCandless (and his blog here: https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate (think many TB), solutions like "you need to distribute your collection over more shards" become very costly. Additionally, the tempting "optimize" button exacerbates the issue since once you form, say, a 100G segment (by optimizing/forceMerging), it is not eligible for merging until 97.5G of the docs in it are deleted, i.e. until its live data falls below half the current default 5G max segment size.
> The proposal here would be to add a new parameter to TMP, something like <maxAllowedPctDeletedInBigSegments> (no, that's not a serious name; suggestions welcome), which would default to 100 (i.e., the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 5G, the following would happen when segments were selected for merging (sketched in code below):
> > any segment with > 20% deleted documents would be merged or rewritten NO MATTER HOW LARGE. There are two cases:
> >> the segment has < 5G "live" docs. In that case it would be merged with smaller segments to bring the resulting segment up to 5G. If no smaller segments exist, it would just be rewritten.
> >> the segment has > 5G "live" docs (the result of a forceMerge or optimize). It would be rewritten into a single segment, removing all deleted docs, no matter how big it is to start with. The 100G example above would be rewritten to an 80G segment, for instance.
> Of course this would lead to potentially much more I/O, which is why the default would be the same behavior we see now. As it stands now, though, there's no way to recover from an optimize/forceMerge except to re-index from scratch. We routinely see 200G-300G Lucene indexes "in the wild" at this point, with tens of shards replicated 3 or more times. And that doesn't even include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A new merge policy is certainly an alternative.
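
To make the two cases above concrete, here is a minimal sketch, assuming the placeholder parameter name from the description; none of this is actual TieredMergePolicy API:

{code:java}
// Hypothetical sketch of the proposed rule; maxAllowedPctDeletedInBigSegments
// is the placeholder name from the description, not a real TMP setting.
class BigSegmentDeletesSketch {
  double maxAllowedPctDeletedInBigSegments = 100.0; // default keeps today's behavior
  long maxMergedSegmentBytes = 5L * 1024 * 1024 * 1024; // current 5G default

  // An oversized segment becomes merge-eligible once its deleted
  // percentage crosses the threshold, no matter how large it is.
  boolean bigSegmentEligible(long segmentBytes, double pctDeleted) {
    return segmentBytes > maxMergedSegmentBytes
        && pctDeleted > maxAllowedPctDeletedInBigSegments;
  }

  // Live bytes pick between the two cases: at or under the 5G cap, merge
  // with smaller segments up to 5G; over the cap, rewrite as a singleton
  // merge that just drops the deleted docs (100G at 20% deleted rewrites
  // to roughly 80G).
  boolean needsSingletonRewrite(long segmentBytes, double pctDeleted) {
    long liveBytes = (long) (segmentBytes * (1.0 - pctDeleted / 100.0));
    return liveBytes > maxMergedSegmentBytes;
  }
}
{code}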



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
