Oh JMH is a good idea, I'll try that, thanks
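For anyone following along, here is a rough sketch of the two access patterns being compared: a plain on-heap float[] dot product versus one reading through a (direct) FloatBuffer, standing in for the IndexInput-backed path. Class and method names here are purely illustrative, not from the PR; in the actual benchmark each dot* method would be a JMH @Benchmark so the generated assembly can be inspected (e.g. with -prof perfasm).

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Illustrative sketch only (hypothetical names): compares an on-heap
// dot product with a buffer-based one. In a real JMH benchmark, each
// dot* method below would be annotated with @Benchmark.
public class DotSketch {

    // On-heap access: a simple loop the JIT can auto-vectorize.
    static float dotArray(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    // Access through a FloatBuffer view (a direct buffer approximates the
    // off-heap case); whether this loop vectorizes is the open question.
    static float dotBuffer(FloatBuffer a, FloatBuffer b) {
        float sum = 0f;
        int n = a.remaining();
        for (int i = 0; i < n; i++) {
            sum += a.get(i) * b.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        int dim = 256;
        float[] x = new float[dim];
        float[] y = new float[dim];
        for (int i = 0; i < dim; i++) {
            x[i] = i * 0.5f;
            y[i] = (dim - i) * 0.25f;
        }
        // Direct, native-order buffers stand in for off-heap storage.
        FloatBuffer xb = ByteBuffer.allocateDirect(dim * Float.BYTES)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        xb.put(x);
        xb.rewind();
        FloatBuffer yb = ByteBuffer.allocateDirect(dim * Float.BYTES)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        yb.put(y);
        yb.rewind();
        // Both paths compute the same value; only their speed should differ.
        System.out.println("array:  " + dotArray(x, y));
        System.out.println("buffer: " + dotBuffer(xb, yb));
    }
}
```

The results are identical either way; the point of wrapping these in JMH would be to see whether only the array loop gets vectorized, along the lines of the JDK bug Robert linked.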
On Wed, Dec 30, 2020 at 10:31 AM Robert Muir <[hidden email]> wrote:
>
> Can you boil this down to a microbenchmark (e.g. JMH) so you can look
> at assembly?
>
> Maybe with on-heap, the dot product is getting vectorized, but with
> off-heap/unsafe it is not.
> e.g. something like this recent bug:
> https://bugs.openjdk.java.net/browse/JDK-8257531
> You could also re-run your benchmark on a JDK 16 early-access build,
> which includes that fix, and see what happens.
>
> On Wed, Dec 30, 2020 at 9:00 AM Michael Sokolov <[hidden email]> wrote:
> >
> > Hi, I've been working on improving the performance of vector KNN
> > search, and found some behavior that surprised me: huge differences in
> > some cases when comparing on-heap memory access with the way we access
> > data today via IndexInput. I'd love to get some other eyes on this to
> > help me better understand the difference, and whether it's significant
> > in practice or just an oddity that only turns up in a microbenchmark.
> > Thanks for taking a look. (Note: I don't plan to commit the attached
> > PR; it's just posted to show how the measurements were done.)
> >
> >
> > https://github.com/apache/lucene-solr/pull/2173
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: [hidden email]
> > For additional commands, e-mail: [hidden email]
> >
>