xms/xmx choices

11 messages

xms/xmx choices

David Hastings
Hey all, over time I've adjusted the Solr Xms/Xmx various times without much thought beyond "more is better," but I've noticed in many of the emails here that the recommended values are much lower than the numbers I've historically used.  I never really bothered to change them, as performance was always more than acceptable.  Until now, that is: we just got a memory upgrade on our Solr nodes, so I figure I may as well do it right.

So I'm sitting at around:
580GB core
150GB core
270GB core
300GB core
depending on merges etc., with around 50k-100k searches a day depending on
the time of year/school calendar.
The three live nodes each have 4TB of decent SSDs that hold the indexes,
and we just went from 148GB to 288GB of memory.
As of now we use an Xms of 8GB and an Xmx of 60GB; per the dashboard the JVM generally hangs around 16GB.  I know Xms and Xmx are supposed to be the same, so that's change #1 on my end.  I'm just wary of dropping it from 60, as over the last few years I have had no problems or performance issues.  I know it's often said to make it lower and let the OS use the RAM for caching the file system/index files, so my first experiment was going to be around 20GB. Does that seem sound, or should I go even lower?

Thanks, always good learning with this email group.
-Dave

Re: xms/xmx choices

Shawn Heisey-2

The Xms and Xmx settings should be the same so Java doesn't need to take
special action to increase the pool size when more than the minimum is
required.  Java tends to always increase to the maximum as it runs, so
there's usually little benefit to specifying a lower minimum than the
maximum.  With a 60GB max heap, Java is likely to grab a little more
than 60GB from the OS, regardless of how much heap is actually in use.

If you can provide GC logs from Solr that cover a significant timeframe,
especially heavy indexing, we can analyze those and make an estimate of
the values you should have for Xms and Xmx.  It will only be a
guess ... something might happen later that requires more heap.
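For reference, producing such logs is just a matter of JVM flags; a minimal sketch, assuming a stock solr.in.sh and that the right flag set depends on the Java version:

```shell
# solr.in.sh fragment (sketch; GC_LOG_OPTS is the variable a stock install uses)

# Java 8 and earlier: classic GC logging flags
#GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
#             -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime"

# Java 9+: unified logging replaces all of the above
GC_LOG_OPTS="-Xlog:gc*"
```

bin/solr then writes rotated solr_gc.log files under the Solr logs directory.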

We can't make recommendations without hard data.  The information you
provided isn't enough to guess how much heap you'll need.  Depending on
how such a system is used, a few GB might be enough, or you might need a
lot more.

https://lucidworks.com/post/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/

Thanks,
Shawn

Re: xms/xmx choices

David Hastings
I know there's no hard answer, and I know the Xms and Xmx should be the
same, but it was a set-it-and-forget-it sort of thing from years ago.  I
will definitely be changing it, but figured I may as well learn as
much as possible from this user group resource.
As far as the raw GC data goes:
https://pastebin.com/vBtpYR1W

(I don't know if people still use pastebin.)  I can get more if needed.  The
systems don't do ANY indexing at all; they are search-only slaves.  They
share resources only with a DB install, and one node will never do both
live search and live DB.  If there's any more info you'd like I'd be
happy to provide it; this is interesting.


Re: xms/xmx choices

David Hastings
That probably isn't enough data, so if you're interested:

https://gofile.io/?c=rZQ2y4


Re: xms/xmx choices

David Hastings
In reply to this post by Shawn Heisey-2
And if this may be of use:
https://imgur.com/a/qXBuSxG

I've just been more or less winging the options since Solr 1.3.



Re: xms/xmx choices

Paras Lehana
Hi David,

Your Xmx seems to be overkill, though without usage stats that can't be
verified. I think you should analyze long GC pauses, given that you have so
much difference between the min and max. I'd prefer making the min/max the
same before stressing over the exact values. You can start with 20G, but what
would you do with the remaining memory?

PS: Your configuration is something I admire. :P



--
Regards,

*Paras Lehana* [65871]
Development Engineer, Auto-Suggest,
IndiaMART Intermesh Ltd.

8th Floor, Tower A, Advant-Navis Business Park, Sector 142,
Noida, UP, IN - 201303

Mob.: +91-9560911996
Work: 01203916600 | Extn: *8173*

Re: xms/xmx choices

Shawn Heisey-2
In reply to this post by David Hastings
On 12/5/2019 12:57 PM, David Hastings wrote:
> That probably isnt enough data, so if youre interested:
>
> https://gofile.io/?c=rZQ2y4

The previous one was less than 4 minutes, so it doesn't reveal anything
useful.

This one is a little bit less than two hours.  That's more useful, but
still pretty short.

Here's the "heap after GC" graph from the larger file:

https://www.dropbox.com/s/q9hs8fl0gfkfqi1/david.hastings.gc.graph.2019.12.png?dl=0

At around 14:15, the heap usage was rather high. It got up over 25GB.
There were some very long GCs right at that time, which probably means
they were full GCs.  And they didn't free up any significant amount of
memory.  So I'm betting that sometimes you actually *do* need a big
chunk of that 60GB of heap.  You might try reducing it to 31g instead of
60000m.  Java's memory usage is a lot more efficient if the max heap
size is less than 32 GB.
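Concretely, that suggestion would look something like this in solr.in.sh (a sketch; SOLR_JAVA_MEM and SOLR_HEAP are the variable names a stock install uses). Staying at 31g rather than 32g leaves a margin below the cutoff where the JVM can no longer use compressed ordinary object pointers:

```shell
# solr.in.sh fragment (sketch)
# Equal min/max, kept under the ~32 GB compressed-oops threshold
SOLR_JAVA_MEM="-Xms31g -Xmx31g"
# or equivalently, in versions that support it:
#SOLR_HEAP="31g"
```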

I can't give you any information about what happened at that time which
required so much heap.  You could see if you have logfiles that cover
that timeframe.

Thanks,
Shawn

Re: xms/xmx choices

David Hastings
Actually, at about that time the replication finished and added about 20-30GB to the index from the master.  My current setup goes:
Indexing master -> indexer slave/production master (only replicated on command) -> three search slaves (replicate every 15 minutes)

We added about 2.3m docs, then I replicated to the production master, and since there was a change it replicated out to the slave node the GC log came from.

I'll set one of the slaves to 31/31 and force all load to that one and see how she does. Thanks!



Re: xms/xmx choices

Erick Erickson
A replication shouldn’t have consumed that much heap. It’s mostly I/O, just a write-through. If replication really does consume huge amounts of heap, we need to look at that more closely. Personally I suspect/hope it’s coincidental, but that’s only a guess. You can attach jconsole to the running process and monitor heap usage in real time; jconsole is part of the JDK, so it should be relatively easy to install. It has a nifty “GC now” button that you can use to see whether the heap you’re accumulating is just garbage or really accumulates.
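If jconsole needs to reach a remote Solr node, the stock include script can expose JMX; a sketch, with the variable names assumed from a standard solr.in.sh:

```shell
# solr.in.sh fragment (sketch; names assumed from a stock install)
ENABLE_REMOTE_JMX_OPTS="true"
RMI_PORT="18983"

# then, from a machine with the JDK installed:
#   jconsole <solr-host>:18983
```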

And if this really is related to replication and that much heap is actually used, we need to figure out why. Shawn’s observation that there is very little heap recovered is worrying.

Best,
Erick


Re: xms/xmx choices

David Hastings
Thank you guys, this has been educational. I uploaded up to now; the
server was restarted after adding the extra memory, so
https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvNi8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMjEtMTA=&channel=WEB

is what I'm looking at.  Tuning the JVM is new to me, so I'm just going by
what I've researched and what this site is saying.
From what I can tell:
  the peak looks like 31GB would be perfect; will implement that today
  throughput seems good, assuming gceasy's recommendation of above 95% is
the target and I'm at 99.6
  latency looks as good as I really care to get; who really cares
about 200ms
  as far as heap after a GC, it looks like it recovered well? Or am I
missing something? The red spikes of a full GC hit around 28GB, and right after
it's down to 14GB

I really appreciate this input; it's educational/helpful.
-Dave
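The heap-after-GC question can also be checked straight from the log file. A sketch, assuming Java 9+ unified GC logging; the sample lines below are illustrative only, so the pattern may need adjusting for a real solr_gc.log:

```shell
# Sketch: pull full-GC pause durations (ms) out of a unified-logging GC log.
# The sample lines are illustrative; real log lines may differ in detail.
cat > /tmp/gc_sample.log <<'EOF'
[2019-12-06T14:15:02.123+0000][info][gc] GC(123) Pause Full (Allocation Failure) 28012M->14003M(31744M) 9123.456ms
[2019-12-06T14:20:11.456+0000][info][gc] GC(140) Pause Young (Normal) (G1 Evacuation Pause) 4096M->1024M(31744M) 42.100ms
EOF

# Keep only full GCs; print the trailing duration with the "ms" suffix stripped
grep 'Pause Full' /tmp/gc_sample.log | awk '{d = $NF; sub(/ms$/, "", d); print d}'
# -> 9123.456
```

A long pause whose before->after heap drops sharply (28GB down to 14GB) is the "just garbage" case; a long pause that barely lowers the post-GC heap is the worrying one.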






Re: xms/xmx choices

David Hastings
In case anyone is interested, I made the memory changes as well as two
other changes:
-XX:ParallelGCThreads: 8 -> 20
-XX:ConcGCThreads: 4 -> 5
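As JVM options, those two changes would land in GC_TUNE in a stock solr.in.sh; a sketch (whether extra GC threads help depends on the collector and core count, so treat the values as the experiment they are):

```shell
# solr.in.sh fragment (sketch; appends to whatever GC_TUNE already holds)
GC_TUNE="$GC_TUNE \
  -XX:ParallelGCThreads=20 \
  -XX:ConcGCThreads=5"
```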

old:
https://gceasy.io/diamondgc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvNi8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMjEtMTA=&channel=WEB

now:
https://gceasy.io/diamondgc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvOS8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMS02&channel=WEB

However, there hasn't really been anything noticeable as far as Solr itself
is concerned when it comes to qtimes.
Pre Java changes:
 43963 searches
Complete SOLR average: 5.33 / 10th seconds for SOLR
Raw SOLR over 10000/1000 secs: 208, 0.47%
Raw SOLR over 1000/1000 secs: 5261, 11.97%

Post Java changes:
 28369 searches
Complete SOLR average: 4.77 / 10th seconds for SOLR
Raw SOLR over 10000/1000 secs: 94, 0.33%
Raw SOLR over 1000/1000 secs: 3583, 12.63%



