[JENKINS] Lucene-Solr-repro - Build # 2658 - Unstable

Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2658/

[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/428/consoleText

[repro] Revision: 91a07ce43555607d00814b08d34323efc0dadc84

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=E307B23DADC3C871 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=fr-CH -Dtests.timezone=Pacific/Guam -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: df119573dbc5781b2eed357821856b44bd7af5fd
[repro] git fetch
[repro] git checkout 91a07ce43555607d00814b08d34323efc0dadc84

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ForceLeaderTest
[repro] ant compile-test

[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.ForceLeaderTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=E307B23DADC3C871 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=fr-CH -Dtests.timezone=Pacific/Guam -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 2281 lines...]
   [junit4]   2> 97050 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 97106 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/
   [junit4]   2> 97106 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422 START replicas=[http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
   [junit4]   2> 97111 INFO  (qtp1721709302-226) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 97113 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422  Received 1 versions from http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/ fingerprint:null
   [junit4]   2> 97117 INFO  (qtp1721709302-228) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 97118 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422  No additional versions requested. ourHighThreshold=1622258848594132992 otherLowThreshold=1622258848594132992 ourHighest=1622258848594132992 otherHighest=1622258848594132992
   [junit4]   2> 97118 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422 DONE. sync succeeded
   [junit4]   2> 97119 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 97119 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/: try and ask http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/ to sync
   [junit4]   2> 97122 INFO  (qtp1721709302-227) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:41603 START replicas=[http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
   [junit4]   2> 97126 INFO  (qtp1938834856-109) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 97126 INFO  (qtp1938834856-109) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 97128 INFO  (qtp1721709302-227) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 97128 INFO  (qtp1721709302-227) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync
   [junit4]   2> 97128 INFO  (qtp1721709302-227) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/&wt=javabin&version=2} status=0 QTime=7
   [junit4]   2> 97130 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/:  sync completed with http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/
   [junit4]   2> 97131 WARN  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t3 as down and I haven't recovered yet, so I shouldn't be the leader.
   [junit4]   2> 97132 ERROR (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
   [junit4]   2> at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
   [junit4]   2> at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
   [junit4]   2> at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
   [junit4]   2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]   2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> at java.lang.Thread.run(Thread.java:748)
   [junit4]   2>
   [junit4]   2> 97132 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
   [junit4]   2> 97133 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
   [junit4]   2> 97134 WARN  (updateExecutor-38-thread-2-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
   [junit4]   2> 97138 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 97138 WARN  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
   [junit4]   2> 97142 INFO  (zkCallback-42-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 97142 INFO  (zkCallback-101-thread-2) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 97142 INFO  (zkCallback-101-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 97142 INFO  (zkCallback-101-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 97142 INFO  (zkCallback-42-thread-5) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 97144 INFO  (zkCallback-42-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 97528 ERROR (indexFetcher-100-thread-1) [    ] o.a.s.h.ReplicationHandler Index fetch failed :org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: forceleader_test_collection slice: shard1 saw state=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/15)={
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "replicationFactor":"0",
   [junit4]   2>   "shards":{"shard1":{
   [junit4]   2>       "range":"80000000-7fffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node2":{
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t1",
   [junit4]   2>           "base_url":"http://127.0.0.1:33442",
   [junit4]   2>           "node_name":"127.0.0.1:33442_",
   [junit4]   2>           "state":"down",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node4":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:46422",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t3",
   [junit4]   2>           "node_name":"127.0.0.1:46422_",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"},
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "state":"down",
   [junit4]   2>           "base_url":"http://127.0.0.1:41603",
   [junit4]   2>           "core":"forceleader_test_collection_shard1_replica_t5",
   [junit4]   2>           "node_name":"127.0.0.1:41603_",
   [junit4]   2>           "force_set_state":"false",
   [junit4]   2>           "type":"TLOG"}}}},
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"1",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"0",
   [junit4]   2>   "tlogReplicas":"3"} with live_nodes=[127.0.0.1:44306_, 127.0.0.1:41603_, 127.0.0.1:46422_]
   [junit4]   2> at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:902)
   [junit4]   2> at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:879)
   [junit4]   2> at org.apache.solr.handler.IndexFetcher.getLeaderReplica(IndexFetcher.java:688)
   [junit4]   2> at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:381)
   [junit4]   2> at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346)
   [junit4]   2> at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425)
   [junit4]   2> at org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
   [junit4]   2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
   [junit4]   2> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
   [junit4]   2> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
   [junit4]   2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> at java.lang.Thread.run(Thread.java:748)
   [junit4]   2>
   [junit4]   2> 97529 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
   [junit4]   2> 98053 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
   [junit4]   2> 98053 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 98053 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 98055 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
   [junit4]   2> 98055 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 98055 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 98056 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
   [junit4]   2> 98057 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 98057 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 98058 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
   [junit4]   2> 98058 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 98058 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 98060 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
   [junit4]   2> 98060 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 98060 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 98061 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
   [junit4]   2> 98061 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 98062 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 98548 INFO  (recoveryExecutor-39-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy RecoveryStrategy has been closed
   [junit4]   2> 98549 INFO  (recoveryExecutor-39-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
   [junit4]   2> 98549 INFO  (updateExecutor-38-thread-2-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 98549 INFO  (updateExecutor-38-thread-2-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 3648ms
   [junit4]   2> 99064 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
   [junit4]   2> 99064 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 99064 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 99065 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
   [junit4]   2> 99065 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 99065 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 99067 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
   [junit4]   2> 99067 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 99067 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 99068 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
   [junit4]   2> 99068 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 99069 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 99070 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
   [junit4]   2> 99070 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 99070 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 99071 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
   [junit4]   2> 99071 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 99071 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 99639 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/
   [junit4]   2> 99640 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:41603 START replicas=[http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
   [junit4]   2> 99644 INFO  (qtp1938834856-113) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 99646 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:41603  Received 1 versions from http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/ fingerprint:null
   [junit4]   2> 99649 INFO  (qtp1938834856-112) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 99650 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:41603  No additional versions requested. ourHighThreshold=1622258848594132992 otherLowThreshold=1622258848594132992 ourHighest=1622258848594132992 otherHighest=1622258848594132992
   [junit4]   2> 99650 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:41603 DONE. sync succeeded
   [junit4]   2> 99651 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 99651 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/ to sync
   [junit4]   2> 99654 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422 START replicas=[http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
   [junit4]   2> 99657 INFO  (qtp1721709302-225) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 99657 INFO  (qtp1721709302-225) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 99660 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 99660 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync
   [junit4]   2> 99660 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=7
   [junit4]   2> 99661 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/:  sync completed with http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/
   [junit4]   2> 99662 WARN  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t5 as down and I haven't recovered yet, so I shouldn't be the leader.
   [junit4]   2> 99663 ERROR (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
   [junit4]   2> at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
   [junit4]   2> at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
   [junit4]   2> at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
   [junit4]   2> at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
   [junit4]   2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
   [junit4]   2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]   2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]   2> at java.lang.Thread.run(Thread.java:748)
   [junit4]   2>
   [junit4]   2> 99663 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
   [junit4]   2> 99665 INFO  (zkCallback-101-thread-3) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
   [junit4]   2> 99665 WARN  (updateExecutor-97-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
   [junit4]   2> 99665 INFO  (updateExecutor-97-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 99665 INFO  (updateExecutor-97-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 5017ms
   [junit4]   2> 99669 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 99669 WARN  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
   [junit4]   2> 99672 INFO  (zkCallback-42-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 99672 INFO  (zkCallback-42-thread-5) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 99673 INFO  (zkCallback-42-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 99673 INFO  (zkCallback-101-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 99673 INFO  (zkCallback-101-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 99673 INFO  (zkCallback-101-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 100074 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
   [junit4]   2> 100074 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 100074 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 100075 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
   [junit4]   2> 100075 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 100076 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 100077 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
   [junit4]   2> 100077 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 100077 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 100078 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
   [junit4]   2> 100078 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 100078 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 100079 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
   [junit4]   2> 100080 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 100080 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 100081 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
   [junit4]   2> 100081 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 100081 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 100499 INFO  (qtp1721709302-229) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=2
   [junit4]   2> 100503 INFO  (qtp1721709302-226) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=1
   [junit4]   2> 100508 INFO  (qtp1721709302-228) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=1
   [junit4]   2> 100514 INFO  (qtp1721709302-227) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100528 INFO  (qtp1721709302-225) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=3
   [junit4]   2> 100534 INFO  (qtp1721709302-229) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100539 INFO  (qtp1721709302-226) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100544 INFO  (qtp1721709302-228) [n:127.0.0.1:41603_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100549 INFO  (qtp82421403-178) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 100552 INFO  (qtp82421403-180) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 100556 INFO  (qtp82421403-182) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 100561 INFO  (qtp82421403-181) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100565 INFO  (qtp82421403-179) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 100570 INFO  (qtp82421403-178) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 100574 INFO  (qtp82421403-180) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 100579 INFO  (qtp82421403-182) [n:127.0.0.1:44306_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
   [junit4]   2> 100582 INFO  (SocketProxy-Acceptor-46422) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=53796,localport=46422], receiveBufferSize:531000
   [junit4]   2> 100583 INFO  (SocketProxy-Acceptor-46422) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=45029,localport=39990], receiveBufferSize=530904
   [junit4]   2> 100585 INFO  (qtp1938834856-109) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=1
   [junit4]   2> 100588 INFO  (qtp1938834856-113) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
   [junit4]   2> 100591 INFO  (qtp1938834856-112) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=1
   [junit4]   2> 100597 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100602 INFO  (qtp1938834856-110) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100607 INFO  (qtp1938834856-109) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100612 INFO  (qtp1938834856-113) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100617 INFO  (qtp1938834856-112) [n:127.0.0.1:46422_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=2
   [junit4]   2> 100635 INFO  (AutoscalingActionExecutor-29-thread-1) [    ] o.a.s.c.a.ExecutePlanAction No operations to execute for event: {
   [junit4]   2>   "id":"1d477d189adb9eT41335fn3tg425nys44i5slg9q",
   [junit4]   2>   "source":".auto_add_replicas",
   [junit4]   2>   "eventTime":8241376933895070,
   [junit4]   2>   "eventType":"NODELOST",
   [junit4]   2>   "properties":{
   [junit4]   2>     "eventTimes":[8241376933895070],
   [junit4]   2>     "preferredOperation":"movereplica",
   [junit4]   2>     "_enqueue_time_":8241386951548841,
   [junit4]   2>     "nodeNames":["127.0.0.1:33442_"]}}
   [junit4]   2> 101083 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
   [junit4]   2> 101084 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 101084 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 101085 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
   [junit4]   2> 101086 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 101086 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 101088 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
   [junit4]   2> 101088 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 101088 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 101090 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
   [junit4]   2> 101090 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 101090 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 101091 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
   [junit4]   2> 101092 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 101092 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 101100 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
   [junit4]   2> 101100 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 101101 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
   [junit4]   2> 102103 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
   [junit4]   2> 102103 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 102103 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 102104 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
   [junit4]   2> 102104 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 102105 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 102106 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
   [junit4]   2> 102106 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 102106 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 102107 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
   [junit4]   2> 102108 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 102108 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 102109 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
   [junit4]   2> 102109 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 102109 WARN  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
   [junit4]   2> 102110 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
   [junit4]   2> 102111 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
   [junit4]   2> 102111 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.AbstractFullDistribZkTestBase No more retries available! Add batch failed due to: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request.
   [junit4]   2> 102111 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.ForceLeaderTest Document couldn't be sent, which is expected.
   [junit4]   2> 102123 INFO  (zkConnectionManagerCallback-124-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 102127 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
   [junit4]   2> 102132 INFO  (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[E307B23DADC3C871]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:34582/solr ready
   [junit4]   2> 102134 INFO  (SocketProxy-Acceptor-44306) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=44340,localport=44306], receiveBufferSize:531000
   [junit4]   2> 102135 INFO  (SocketProxy-Acceptor-44306) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=38123,localport=47646], receiveBufferSize=530904
   [junit4]   2> 102137 INFO  (qtp82421403-179) [n:127.0.0.1:44306_    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :forceleader with params action=FORCELEADER&collection=forceleader_test_collection&shard=shard1&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 102137 INFO  (qtp82421403-179) [n:127.0.0.1:44306_ c:forceleader_test_collection   ] o.a.s.h.a.CollectionsHandler Force leader invoked, state: znodeVersion: 10
   [junit4]   2> live nodes:[127.0.0.1:41603_, 127.0.0.1:44306_, 127.0.0.1:46422_]
   [junit4]   2> collections:{collection1=DocCollection(collection1//clusterstate.json/10)={
   [junit4]   2>   "pullReplicas":"0",
   [junit4]   2>   "replicationFactor":"1",
   [junit4]   2>   "shards":{
   [junit4]   2>     "shard1":{
   [junit4]   2>       "range":"80000000-ffffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{"core_node3":{
   [junit4]   2>           "core":"collection1_shard1_replica_n2",
   [junit4]   2>           "base_url":"http://127.0.0.1:33442",
   [junit4]   2>           "node_name":"127.0.0.1:33442_",
   [junit4]   2>           "state":"down",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "leader":"true"}}},
   [junit4]   2>     "shard2":{
   [junit4]   2>       "range":"0-7fffffff",
   [junit4]   2>       "state":"active",
   [junit4]   2>       "replicas":{
   [junit4]   2>         "core_node4":{
   [junit4]   2>           "core":"collection1_shard2_replica_n1",
   [junit4]   2>           "base_url":"http://127.0.0.1:44306",
   [junit4]   2>           "node_name":"127.0.0.1:44306_",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT",
   [junit4]   2>           "leader":"true"},
   [junit4]   2>         "core_node6":{
   [junit4]   2>           "core":"collection1_shard2_replica_n5",
   [junit4]   2>           "base_url":"http://127.0.0.1:41603",
   [junit4]   2>           "node_name":"127.0.0.1:41603_",
   [junit4]   2>           "state":"active",
   [junit4]   2>           "type":"NRT"}}}},
   [junit4]   2>   "router":{"name":"compositeId"},
   [junit4]   2>   "maxShardsPerNode":"1",
   [junit4]   2>   "autoAddReplicas":"false",
   [junit4]   2>   "nrtReplicas":"1",
   [junit4]   2>   "tlogReplicas":"0"}, control_collection=LazyCollectionRef(control_collection), forceleader_test_collection=LazyCollectionRef(forceleader_test_collection)}
   [junit4]   2> 102150 INFO  (qtp82421403-179) [n:127.0.0.1:44306_ c:forceleader_test_collection   ] o.a.s.h.a.CollectionsHandler Cleaning out LIR data, which was:     /collections/forceleader_test_collection/leader_initiated_recovery/shard1 (2)
   [junit4]   2>      /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node6 (0)
   [junit4]   2>      DATA:
   [junit4]   2>          {
   [junit4]   2>            "state":"down",
   [junit4]   2>            "createdByNodeName":"127.0.0.1:33442_",
   [junit4]   2>            "createdByCoreNodeName":"core_node2"}
   [junit4]   2>      /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node4 (0)
   [junit4]   2>      DATA:
   [junit4]   2>          {
   [junit4]   2>            "state":"down",
   [junit4]   2>            "createdByNodeName":"127.0.0.1:33442_",
   [junit4]   2>            "createdByCoreNodeName":"core_node2"}
   [junit4]   2>
   [junit4]   2> 102170 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/
   [junit4]   2> 102170 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422 START replicas=[http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
   [junit4]   2> 102174 INFO  (qtp1721709302-227) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 102176 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422  Received 1 versions from http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/ fingerprint:null
   [junit4]   2> 102179 INFO  (qtp1721709302-225) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 102182 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422  No additional versions requested. ourHighThreshold=1622258848594132992 otherLowThreshold=1622258848594132992 ourHighest=1622258848594132992 otherHighest=1622258848594132992
   [junit4]   2> 102182 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:46422 DONE. sync succeeded
   [junit4]   2> 102183 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 102183 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/: try and ask http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/ to sync
   [junit4]   2> 102186 INFO  (qtp1721709302-229) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:41603 START replicas=[http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
   [junit4]   2> 102189 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 102189 INFO  (qtp1938834856-111) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 102191 INFO  (qtp1721709302-229) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
   [junit4]   2> 102191 INFO  (qtp1721709302-229) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync
   [junit4]   2> 102191 INFO  (qtp1721709302-229) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5]  webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/&wt=javabin&version=2} status=0 QTime=6
   [junit4]   2> 102193 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/:  sync completed with http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/
   [junit4]   2> 102194 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t3 stopping background replication from leader
   [junit4]   2> 102198 INFO  (recoveryExecutor-39-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=false
   [junit4]   2> 102198 INFO  (recoveryExecutor-39-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t3 stopping background replication from leader
   [junit4]   2> 102202 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext Replaying tlog before become new leader
   [junit4]   2> 102202 WARN  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.UpdateLog Starting log replay tlog{file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.ForceLeaderTest_E307B23DADC3C871-001/control-002/cores/forceleader_test_collection_shard1_replica_t3/data/tlog/tlog.0000000000000000000 refcount=2} active=false starting pos=0 inSortedOrder=true
   [junit4]   2> 102212 INFO  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.DirectUpdateHandler2 start commit{flags=2,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 102213 INFO  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@4b12e7d8 commitCommandVersion:0
   [junit4]   2> 102474 INFO  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.s.SolrIndexSearcher Opening [Searcher@ec8c47b[forceleader_test_collection_shard1_replica_t3] main]
   [junit4]   2> 102477 INFO  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 102485 INFO  (searcherExecutor-87-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SolrCore [forceleader_test_collection_shard1_replica_t3] Registered new searcher Searcher@ec8c47b[forceleader_test_collection_shard1_replica_t3] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):c1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-137-generic, timestamp=1547106614801}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 102486 INFO  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_test_collection_shard1_replica_t3] {add=[1 (1622258848594132992)]} 0 284
   [junit4]   2> 102487 WARN  (recoveryExecutor-89-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.UpdateLog Log replay finished. recoveryInfo=RecoveryInfo{adds=1 deletes=0 deleteByQuery=0 errors=0 positionOfStart=0}
   [junit4]   2> 102487 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/forceleader_test_collection/leaders/shard1/leader after winning as /collections/forceleader_test_collection/leader_elect/shard1/election/72597698715582468-core_node4-n_0000000005
   [junit4]   2> 102497 INFO  (zkCallback-101-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102497 INFO  (zkCallback-101-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102497 INFO  (zkCallback-42-thread-5) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102497 INFO  (zkCallback-42-thread-1) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/ shard1
   [junit4]   2> 102497 INFO  (zkCallback-101-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102497 INFO  (zkCallback-42-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102498 INFO  (zkCallback-42-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102499 WARN  (recoveryExecutor-39-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy We have not yet recovered - but we are now the leader!
   [junit4]   2> 102499 INFO  (recoveryExecutor-39-thread-1-processing-n:127.0.0.1:46422_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Finished recovery process.
   [junit4]   2> 102609 INFO  (zkCallback-42-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102609 INFO  (zkCallback-101-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102609 INFO  (zkCallback-101-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102610 INFO  (zkCallback-101-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102609 INFO  (zkCallback-42-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 102610 INFO  (zkCallback-42-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104683 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=false
   [junit4]   2> 104684 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t5 stopping background replication from leader
   [junit4]   2> 104686 INFO  (SocketProxy-Acceptor-46422) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=53830,localport=46422], receiveBufferSize:531000
   [junit4]   2> 104687 INFO  (SocketProxy-Acceptor-46422) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=45029,localport=40024], receiveBufferSize=530904
   [junit4]   2> 104719 INFO  (qtp1938834856-109) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/admin/ping params={wt=javabin&version=2} hits=1 status=0 QTime=29
   [junit4]   2> 104719 INFO  (qtp1938834856-109) [n:127.0.0.1:46422_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3]  webapp= path=/admin/ping params={wt=javabin&version=2} status=0 QTime=30
   [junit4]   2> 104721 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[forceleader_test_collection_shard1_replica_t5]
   [junit4]   2> 104722 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=tlog{file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.ForceLeaderTest_E307B23DADC3C871-001/shard-3-001/cores/forceleader_test_collection_shard1_replica_t5/data/tlog/tlog.0000000000000000000 refcount=1}}
   [junit4]   2> 104722 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Publishing state of core [forceleader_test_collection_shard1_replica_t5] as recovering, leader is [http://127.0.0.1:46422/forceleader_test_collection_shard1_replica_t3/] and I am [http://127.0.0.1:41603/forceleader_test_collection_shard1_replica_t5/]
   [junit4]   2> 104725 INFO  (recoveryExecutor-98-thread-1-processing-n:127.0.0.1:41603_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:41603_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:46422]; [WaitForState: action=PREPRECOVERY&core=forceleader_test_collection_shard1_replica_t3&nodeName=127.0.0.1:41603_&coreNodeName=core_node6&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 104729 INFO  (zkCallback-101-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104729 INFO  (zkCallback-101-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104729 INFO  (zkCallback-101-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104729 INFO  (zkCallback-42-thread-3) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104730 INFO  (zkCallback-42-thread-4) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104730 INFO  (zkCallback-42-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
   [junit4]   2> 104732 INFO  (qtp1938834856-113) [n:127.0.0.1:46422_    x:forceleader_test_collection_shard1_replica_t3] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node6, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
   [junit4]   2> 104734 INFO  (qtp1938834856-113) [n:127.0.0.1:46422_    x:forceleader_test_collection_shard1_replica_t3] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t3, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:41603_, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"state":"down","base_url":"http://127.0.0.1:41603","core":"forceleader_test_collection_shard1_replica_t5","node_name":"127.0.0.1:41603_","force_set_state":"false","type":"TLOG"}
   [junit4]   2> 104734 INFO  (watches-44-thread-3) [    ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t3, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:41603_, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"core":"forceleader_test_collection_shard1_replica_t5","base_url":"http://127.0.0.1:41603","node_name":"127.0.0.1:41603_","state":"recovering","type":"TLOG"}
   [junit4]   2> 104735 INFO  (watches-44-thread-1) [    ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t3, leaderDoesNotNeedRecovery=false, isLeader? true, li

[...truncated too long message...]

-4) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (3)
   [junit4]   2> 116584 INFO  (closeThreadPool-152-thread-2) [    ] o.a.s.c.Overseer Overseer (id=72597828673208324-127.0.0.1:39870_s_d%2Fey-n_0000000000) closing
   [junit4]   2> 116590 INFO  (closeThreadPool-142-thread-7) [    ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jetty, tag=null
   [junit4]   2> 116592 INFO  (closeThreadPool-142-thread-7) [    ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@1bfa9cc1: rootName = null, domain = solr.jetty, service url = null, agent id = null] for registry solr.jetty / com.codahale.metrics.MetricRegistry@70c7d0d5
   [junit4]   2> 116594 INFO  (closeThreadPool-142-thread-7) [    ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.cluster, tag=null
   [junit4]   2> 116599 INFO  (closeThreadPool-142-thread-2) [    ] o.a.s.c.Overseer Overseer (id=72597828673208324-127.0.0.1:39870_s_d%2Fey-n_0000000000) closing
   [junit4]   2> 116600 INFO  (zkCallback-136-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (1)
   [junit4]   2> 116603 INFO  (zkCallback-129-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (1)
   [junit4]   2> 116603 INFO  (zkCallback-129-thread-3) [    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:39823_s_d%2Fey
   [junit4]   2> 116658 INFO  (closeThreadPool-142-thread-8) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@19746ec5{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
   [junit4]   2> 116676 INFO  (closeThreadPool-142-thread-8) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@287a4866{/s_d/ey,null,UNAVAILABLE}
   [junit4]   2> 116691 INFO  (closeThreadPool-142-thread-8) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 116691 INFO  (closeThreadPool-142-thread-2) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@1ea67873{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
   [junit4]   2> 116693 WARN  (closeThreadPool-142-thread-8) [    ] o.a.s.c.s.c.SocketProxy Closing 12 connections to: http://127.0.0.1:46881/s_d/ey, target: http://127.0.0.1:43513/s_d/ey
   [junit4]   2> 116709 INFO  (closeThreadPool-142-thread-6) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@150168ac{HTTP/1.1,[http/1.1]}{127.0.0.1:0}
   [junit4]   2> 116712 INFO  (closeThreadPool-142-thread-2) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@33b3512a{/s_d/ey,null,UNAVAILABLE}
   [junit4]   2> 116718 INFO  (closeThreadPool-142-thread-6) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@1e6ab388{/s_d/ey,null,UNAVAILABLE}
   [junit4]   2> 116735 INFO  (closeThreadPool-142-thread-7) [    ] o.e.j.s.AbstractConnector Stopped ServerConnector@c3f0fef{HTTP/1.1,[http/1.1]}{127.0.0.1:35399}
   [junit4]   2> 116737 INFO  (closeThreadPool-142-thread-2) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 116737 INFO  (closeThreadPool-142-thread-6) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 116744 WARN  (closeThreadPool-142-thread-2) [    ] o.a.s.c.s.c.SocketProxy Closing 6 connections to: http://127.0.0.1:39870/s_d/ey, target: http://127.0.0.1:38999/s_d/ey
   [junit4]   2> 116745 INFO  (closeThreadPool-142-thread-7) [    ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@552d4580{/s_d/ey,null,UNAVAILABLE}
   [junit4]   2> 116746 WARN  (closeThreadPool-142-thread-6) [    ] o.a.s.c.s.c.SocketProxy Closing 6 connections to: http://127.0.0.1:43430/s_d/ey, target: http://127.0.0.1:43661/s_d/ey
   [junit4]   2> 116747 INFO  (closeThreadPool-142-thread-7) [    ] o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 116753 WARN  (closeThreadPool-142-thread-7) [    ] o.a.s.c.s.c.SocketProxy Closing 3 connections to: http://127.0.0.1:39823/s_d/ey, target: http://127.0.0.1:35399/s_d/ey
   [junit4]   2> 116754 INFO  (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[A2A8D262BB73690]) [    ] o.a.s.c.ZkTestServer Shutting down ZkTestServer.
   [junit4]   2> 116763 WARN  (ZkTestServer Run Thread) [    ] o.a.s.c.ZkTestServer Watch limit violations:
   [junit4]   2> Maximum concurrent create/delete watches above limit:
   [junit4]   2>
   [junit4]   2> 45 /solr/collections/forceleader_lower_terms_collection/terms/shard1
   [junit4]   2> 14 /solr/collections/collection1/terms/shard2
   [junit4]   2> 13 /solr/aliases.json
   [junit4]   2> 5 /solr/security.json
   [junit4]   2> 5 /solr/configs/conf1
   [junit4]   2> 2 /solr/collections/forceleader_lower_terms_collection/state.json
   [junit4]   2> 2 /solr/collections/collection1/terms/shard1
   [junit4]   2> 2 /solr/collections/control_collection/terms/shard1
   [junit4]   2>
   [junit4]   2> Maximum concurrent data watches above limit:
   [junit4]   2>
   [junit4]   2> 57 /solr/collections/collection1/state.json
   [junit4]   2> 36 /solr/collections/forceleader_lower_terms_collection/state.json
   [junit4]   2> 13 /solr/clusterprops.json
   [junit4]   2> 13 /solr/clusterstate.json
   [junit4]   2> 9 /solr/collections/control_collection/state.json
   [junit4]   2> 3 /solr/overseer_elect/election/72597828673208331-127.0.0.1:43430_s_d%2Fey-n_0000000001
   [junit4]   2> 2 /solr/collections/collection1/leader_elect/shard2/election/72597828673208338-core_node3-n_0000000000
   [junit4]   2>
   [junit4]   2> Maximum concurrent children watches above limit:
   [junit4]   2>
   [junit4]   2> 13 /solr/collections
   [junit4]   2> 11 /solr/live_nodes
   [junit4]   2>
   [junit4]   2> 116793 INFO  (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[A2A8D262BB73690]) [    ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:40089
   [junit4]   2> 116793 INFO  (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[A2A8D262BB73690]) [    ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 40089
   [junit4] OK      62.6s J2 | ForceLeaderTest.testReplicasInLowerTerms
   [junit4]   2> NOTE: leaving temporary files on disk at: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.ForceLeaderTest_A2A8D262BB73690-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {multiDefault=PostingsFormat(name=LuceneVarGapDocFreqInterval), a_t=PostingsFormat(name=LuceneVarGapDocFreqInterval), id=PostingsFormat(name=Memory), text=PostingsFormat(name=Direct)}, docValues:{range_facet_l_dv=DocValuesFormat(name=Lucene70), _version_=DocValuesFormat(name=Memory), intDefault=DocValuesFormat(name=Memory), id_i1=DocValuesFormat(name=Memory), range_facet_i_dv=DocValuesFormat(name=Lucene70), intDvoDefault=DocValuesFormat(name=Lucene70), range_facet_l=DocValuesFormat(name=Lucene70), timestamp=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=964, maxMBSortInHeap=7.245119178221359, sim=RandomSimilarity(queryNorm=true): {}, locale=fr-CH, timezone=Pacific/Guam
   [junit4]   2> NOTE: Linux 4.4.0-137-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=1,free=137880832,total=329777152
   [junit4]   2> NOTE: All tests run in this JVM: [ForceLeaderTest]
   [junit4] Completed [3/5 (3!)] on J2 in 113.68s, 3 tests, 1 error, 1 skipped <<< FAILURES!
   [junit4]
   [junit4] Suite: org.apache.solr.cloud.ForceLeaderTest
   [junit4] OK      54.4s J0 | ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4] OK      45.1s J0 | ForceLeaderTest.testReplicasInLowerTerms
   [junit4] Completed [4/5 (3!)] on J0 in 101.11s, 3 tests, 1 skipped
   [junit4]
   [junit4] Suite: org.apache.solr.cloud.ForceLeaderTest
   [junit4] OK      55.6s J1 | ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4] OK      44.9s J1 | ForceLeaderTest.testReplicasInLowerTerms
   [junit4] Completed [5/5 (3!)] on J1 in 101.99s, 3 tests, 1 skipped
   [junit4]
   [junit4]
   [junit4] Tests with failures [seed: A2A8D262BB73690]:
   [junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
   [junit4]
   [junit4]
   [junit4] JVM J0:     1.84 ..   218.01 =   216.18s
   [junit4] JVM J1:     1.75 ..   221.34 =   219.60s
   [junit4] JVM J2:     1.72 ..   119.27 =   117.54s
   [junit4] Execution time total: 3 minutes 41 seconds
   [junit4] Tests summary: 5 suites, 15 tests, 3 errors, 5 ignored

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 5 suites, 15 tests, 3 errors, 5 ignored [seed: A2A8D262BB73690]

Total time: 3 minutes 46 seconds

[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   3/5 failed: org.apache.solr.cloud.ForceLeaderTest
[repro] git checkout df119573dbc5781b2eed357821856b44bd7af5fd
Note: checking out 'df119573dbc5781b2eed357821856b44bd7af5fd'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at df11957... SOLR-12888: Run URP now auto-registers NestedUpdateProcessor before it.
[repro] Exiting with code 256
Archiving artifacts
[Fast Archiver] No artifacts from Lucene-Solr-repro Repro-Lucene-Solr-Tests-7.x#1202 to compare, so performing full copy of artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
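
For anyone who wants to chase this down locally, a minimal sketch of a repro run (assuming a branch_7x checkout and the stock Ant test targets; the property names below are the usual Lucene/Solr randomized-testing switches, and the seed is the one reported in the failure summary above) might be:

    cd solr/core
    ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader \
        -Dtests.seed=A2A8D262BB73690 -Dtests.nightly=true -Dtests.slow=true

Since the repro job also saw 3/5 failures at the tip of branch_7x without a fixed seed, dropping -Dtests.seed may reproduce the failure just as readily.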

