[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap


JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749180#comment-16749180 ]

Ankit Jain commented on LUCENE-8635:

{quote}Technically we could make things work for existing segments since your patch doesn't change the file format.{quote}
[~jpountz] - I'm curious how this can be done. I looked at the code, and it seemed that all settings are passed to the segment writer, which would then have to record them in the codec for the reader to consume. Do you have any pointers on this?

{quote}I agree it's a bit unlikely that the terms index gets paged out, but you can still end up with a cold FS cache eg. when the host restarts?{quote}
We could add an option for preloading the terms index during index open. Lucene already provides an option for preloading mapped buffers [here|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L95], but it is applied at the directory level rather than the file level. Elasticsearch, however, worked around that to provide a [file-level setting|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]
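For illustration, the preload behavior boils down to {{MappedByteBuffer.load()}} from plain NIO (which is what MMapDirectory's preload option uses under the hood). A minimal self-contained sketch, not the actual Lucene code:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreloadDemo {
    // Maps a file read-only and optionally preloads its pages into the
    // FS cache, similar in spirit to MMapDirectory's preload option.
    static MappedByteBuffer mapFile(Path path, boolean preload) throws IOException {
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buffer =
                channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            if (preload) {
                // Touch every page now so later reads do not page-fault.
                buffer.load();
            }
            return buffer;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("terms", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3, 4});
        MappedByteBuffer buf = mapFile(tmp, true);
        System.out.println(buf.get(0) + " " + buf.get(3)); // reads served from mapped pages
        Files.delete(tmp);
    }
}
```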

{quote}For the record, Lucene also performs implicit PK lookups when indexing with updateDocument. So this might have an impact on indexing speed as well.{quote}
If a customer's workload is updateDocument-heavy, the impact should be minimal: the terms index will be loaded into memory after the first fault on each page, after which there should not be any further page faults. Customers that are sensitive to latency can use the preload option for the terms index.

{quote}Wondering whether avoiding 'array reversal' in the second patch is what helped rather than moving to random access and removing skip? May be we should try with reading one byte at a time with original patch.{quote}
I overlooked that earlier and attributed the performance gain to the absence of seek operations. This makes a lot more sense; I will try some benchmarks after changing readBytes to the following:
    public byte readByte() throws IOException {
        return this.in.readByte();
    }

    public void readBytes(byte[] b, int offset, int len) throws IOException {
        // Fill the buffer back to front, one byte at a time, to preserve
        // the reversed byte order the FST reader expects.
        for (int i = offset + len - 1; i >= offset; i--) {
            b[i] = this.readByte();
        }
    }
{quote}I uploaded a patch that combines these three things: off-heap FST + random-access reader + reversal of the FST so it is forward-read. Unit tests are passing; I'm running some benchmarks to see what the impact is on performance{quote}
That's great Mike. If this works, we don't need the reverse reader. We don't even need the random-access reader, as we can simply change readBytes to the following:
    public void readBytes(byte[] b, int offset, int len) throws IOException {
        // With a forward-read FST, a plain bulk delegate is sufficient.
        this.in.readBytes(b, offset, len);
    }

> Lazy loading Lucene FST offheap using mmap
> ------------------------------------------
>                 Key: LUCENE-8635
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8635
>             Project: Lucene - Core
>          Issue Type: New Feature
>          Components: core/FSTs
>         Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>            Reporter: Ankit Jain
>            Priority: Major
>         Attachments: fst-offheap-ra-rev.patch, offheap.patch, ra.patch, rally_benchmark.xlsx
> Currently, the FST loads all terms into heap memory during index open. This causes frequent JVM OOM issues if the terms data gets big. A better approach would be to lazily load the FST using mmap, which ensures that only the required terms get loaded into memory.
> Lucene can expose an API for providing the list of fields whose terms should be loaded off-heap. I'm planning to take the following approach:
>  # Add a boolean property fstOffHeap in FieldInfo
> # Pass the list of off-heap fields to Lucene during index open (ALL can be a special keyword for loading all fields off-heap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based on fstOffHeap field
> I created a patch (that loads all fields offheap) and did some benchmarks using es_rally; the results look good.
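For reference, the contrast the issue description draws between on-heap and off-heap loading can be sketched with plain NIO (a simplified illustration, not the actual FieldReader/FST code):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FstLoadingDemo {
    // On-heap style: copy the whole terms index into a heap array at open
    // time; this is what can trigger OOM for very large terms data.
    static byte[] loadOnHeap(Path path) throws IOException {
        return Files.readAllBytes(path);
    }

    // Off-heap style: mmap the file; pages are faulted in lazily, only
    // when the corresponding bytes are actually read.
    static MappedByteBuffer loadOffHeap(Path path) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("fst", ".bin");
        Files.write(tmp, new byte[] {10, 20, 30});
        byte[] heap = loadOnHeap(tmp);              // all bytes on the JVM heap
        MappedByteBuffer mapped = loadOffHeap(tmp); // nothing copied to the heap
        System.out.println(heap[1] + " " + mapped.get(1));
        Files.delete(tmp);
    }
}
```

Both paths return the same bytes; the difference is where they live and when they are paged in.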

This message was sent by Atlassian JIRA
