[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 461 - Unstable

classic Classic list List threaded Threaded
3 messages Options
Reply | Threaded
Open this post in threaded view
|

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 461 - Unstable

Apache Jenkins Server-2
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/461/

1 tests failed.
FAILED:  org.apache.solr.cloud.ReindexCollectionTest.testSameTargetReindexing

Error Message:
Solr11035BandAid failed, counts differ after updates: expected:<199> but was:<200>

Stack Trace:
java.lang.AssertionError: Solr11035BandAid failed, counts differ after updates: expected:<199> but was:<200>
        at __randomizedtesting.SeedInfo.seed([15B345B130871AD:B422DF8C146EE9E9]:0)
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:834)
        at org.junit.Assert.assertEquals(Assert.java:645)
        at org.apache.solr.SolrTestCaseJ4.Solr11035BandAid(SolrTestCaseJ4.java:3144)
        at org.apache.solr.cloud.ReindexCollectionTest.indexDocs(ReindexCollectionTest.java:405)
        at org.apache.solr.cloud.ReindexCollectionTest.doTestSameTargetReindexing(ReindexCollectionTest.java:166)
        at org.apache.solr.cloud.ReindexCollectionTest.testSameTargetReindexing(ReindexCollectionTest.java:158)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:
[...truncated 13196 lines...]
   [junit4] Suite: org.apache.solr.cloud.ReindexCollectionTest
   [junit4]   2> 364730 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 364744 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.SolrTestCaseJ4 Created dataDir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/data-dir-48-001
   [junit4]   2> 364744 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 364745 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, clientAuth=0.0/0.0)
   [junit4]   2> 364746 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001
   [junit4]   2> 364746 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 364747 INFO  (ZkTestServer Run Thread) [     ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 364764 INFO  (ZkTestServer Run Thread) [     ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 364852 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.ZkTestServer start zk server on port:35572
   [junit4]   2> 364853 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.ZkTestServer waitForServerUp: 127.0.0.1:35572
   [junit4]   2> 364853 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:35572
   [junit4]   2> 364853 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 35572
   [junit4]   2> 364889 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 364926 INFO  (zkConnectionManagerCallback-1002-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 364926 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 365128 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 365258 INFO  (zkConnectionManagerCallback-1004-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 365274 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 365291 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 365334 INFO  (zkConnectionManagerCallback-1006-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 365334 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 365539 WARN  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 365540 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 365540 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 365540 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: afcf563148970e98786327af5e07c261fda175d3; jvm 11.0.1+13-LTS
   [junit4]   2> 365540 WARN  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 365541 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 365541 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 365541 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: afcf563148970e98786327af5e07c261fda175d3; jvm 11.0.1+13-LTS
   [junit4]   2> 365594 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 365594 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 365594 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 365603 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@7fb26f6a{/solr,null,AVAILABLE}
   [junit4]   2> 365603 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.AbstractConnector Started ServerConnector@13620fb{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:36512}
   [junit4]   2> 365603 INFO  (jetty-launcher-1007-thread-2) [     ] o.e.j.s.Server Started @365641ms
   [junit4]   2> 365603 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=36512}
   [junit4]   2> 365604 ERROR (jetty-launcher-1007-thread-2) [     ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 365604 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 365604 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 9.0.0
   [junit4]   2> 365604 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 365604 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 365604 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2019-08-30T22:57:18.100298Z
   [junit4]   2> 365618 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 365801 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 365801 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 365801 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 365802 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@32f1816c{/solr,null,AVAILABLE}
   [junit4]   2> 365803 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.AbstractConnector Started ServerConnector@6b7dd646{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:37284}
   [junit4]   2> 365803 INFO  (jetty-launcher-1007-thread-1) [     ] o.e.j.s.Server Started @365841ms
   [junit4]   2> 365803 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=37284}
   [junit4]   2> 365804 ERROR (jetty-launcher-1007-thread-1) [     ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 365804 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 365804 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 9.0.0
   [junit4]   2> 365804 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 365804 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 365804 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2019-08-30T22:57:18.300783Z
   [junit4]   2> 365856 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 365857 INFO  (zkConnectionManagerCallback-1009-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 365857 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 365879 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 365895 INFO  (zkConnectionManagerCallback-1011-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 365896 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 365946 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 366479 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=true]
   [junit4]   2> 366530 WARN  (jetty-launcher-1007-thread-1) [     ] o.e.j.u.s.S.config Trusting all certificates configured for Client@54d6900b[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 366531 WARN  (jetty-launcher-1007-thread-1) [     ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@54d6900b[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 366551 WARN  (jetty-launcher-1007-thread-1) [     ] o.e.j.u.s.S.config Trusting all certificates configured for Client@71ca87ed[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 366551 WARN  (jetty-launcher-1007-thread-1) [     ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@71ca87ed[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 366552 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35572/solr
   [junit4]   2> 366567 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 366674 INFO  (zkConnectionManagerCallback-1019-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 366676 INFO  (jetty-launcher-1007-thread-1) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 366935 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 366991 INFO  (zkConnectionManagerCallback-1021-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 366991 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 367425 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=true]
   [junit4]   2> 367502 WARN  (jetty-launcher-1007-thread-2) [     ] o.e.j.u.s.S.config Trusting all certificates configured for Client@27bd1e29[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 367502 WARN  (jetty-launcher-1007-thread-2) [     ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@27bd1e29[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 367521 WARN  (jetty-launcher-1007-thread-2) [     ] o.e.j.u.s.S.config Trusting all certificates configured for Client@3be3fbc4[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 367521 WARN  (jetty-launcher-1007-thread-2) [     ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for Client@3be3fbc4[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 367541 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:37284_solr
   [junit4]   2> 367555 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.Overseer Overseer (id=75304459696537606-127.0.0.1:37284_solr-n_0000000000) starting
   [junit4]   2> 367574 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35572/solr
   [junit4]   2> 367577 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 367611 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 367627 INFO  (zkConnectionManagerCallback-1034-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 367627 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 367639 INFO  (zkConnectionManagerCallback-1029-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 367639 INFO  (jetty-launcher-1007-thread-2) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 367776 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35572/solr ready
   [junit4]   2> 367793 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 367816 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:37284_solr
   [junit4]   2> 367843 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:37284_solr
   [junit4]   2> 367843 INFO  (zkConnectionManagerCallback-1036-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 367843 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 367903 INFO  (zkCallback-1020-thread-1) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 367904 INFO  (zkCallback-1033-thread-1) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 367905 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.PackageManager clusterprops.json changed , version -1
   [junit4]   2> 367946 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 367962 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.ZkController Publish node=127.0.0.1:36512_solr as DOWN
   [junit4]   2> 367980 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 367980 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:36512_solr
   [junit4]   2> 367982 INFO  (zkCallback-1020-thread-1) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 367983 INFO  (zkCallback-1033-thread-1) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 367999 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 368024 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 368061 INFO  (zkCallback-1035-thread-1) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 368062 INFO  (zkConnectionManagerCallback-1044-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 368062 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 368064 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 368077 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35572/solr ready
   [junit4]   2> 368077 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.PackageManager clusterprops.json changed , version -1
   [junit4]   2> 368135 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37284.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 368172 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37284.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 368173 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37284.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 368174 INFO  (jetty-launcher-1007-thread-1) [n:127.0.0.1:37284_solr     ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node1/.
   [junit4]   2> 368183 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
   [junit4]   2> 368347 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36512.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 368419 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36512.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 368420 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36512.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 368421 INFO  (jetty-launcher-1007-thread-2) [n:127.0.0.1:36512_solr     ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node2/.
   [junit4]   2> 368656 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.MiniSolrCloudCluster waitForAllNodes: numServers=2
   [junit4]   2> 368669 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 368701 INFO  (zkConnectionManagerCallback-1050-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 368701 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 368703 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 368704 INFO  (SUITE-ReindexCollectionTest-seed#[15B345B130871AD]-worker) [     ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35572/solr ready
   [junit4]   2> 368845 INFO  (TEST-ReindexCollectionTest.testReshapeReindexing-seed#[15B345B130871AD]) [     ] o.a.s.SolrTestCaseJ4 ###Starting testReshapeReindexing
   [junit4]   2> 368863 INFO  (TEST-ReindexCollectionTest.testReshapeReindexing-seed#[15B345B130871AD]) [     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 368903 INFO  (zkConnectionManagerCallback-1055-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 368903 INFO  (TEST-ReindexCollectionTest.testReshapeReindexing-seed#[15B345B130871AD]) [     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 368917 INFO  (TEST-ReindexCollectionTest.testReshapeReindexing-seed#[15B345B130871AD]) [     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 368918 INFO  (TEST-ReindexCollectionTest.testReshapeReindexing-seed#[15B345B130871AD]) [     ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35572/solr ready
   [junit4]   2> 368934 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr     ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params collection.configName=conf1&maxShardsPerNode=-1&name=reshapeReindexing&nrtReplicas=2&action=CREATE&numShards=2&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 368970 INFO  (OverseerThreadFactory-1133-thread-1-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.a.c.CreateCollectionCmd Create collection reshapeReindexing
   [junit4]   2> 369123 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexing",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"reshapeReindexing_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:36512/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"}
   [junit4]   2> 369140 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexing",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"reshapeReindexing_shard1_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:37284/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"}
   [junit4]   2> 369142 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexing",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"reshapeReindexing_shard2_replica_n4",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:36512/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"}
   [junit4]   2> 369232 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexing",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"reshapeReindexing_shard2_replica_n6",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:37284/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"}
   [junit4]   2> 369510 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr    x:reshapeReindexing_shard1_replica_n1 ] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&coreNodeName=core_node3&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard1_replica_n1&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 369550 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr    x:reshapeReindexing_shard2_replica_n4 ] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard2_replica_n4&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard2&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 369551 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr    x:reshapeReindexing_shard2_replica_n6 ] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard2_replica_n6&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard2&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 369551 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr    x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 369630 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr    x:reshapeReindexing_shard1_replica_n2 ] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&coreNodeName=core_node5&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard1_replica_n2&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 370758 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
   [junit4]   2> 370776 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
   [junit4]   2> 370852 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
   [junit4]   2> 370940 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.s.IndexSchema [reshapeReindexing_shard2_replica_n4] Schema name=minimal
   [junit4]   2> 371049 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 371049 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.CoreContainer Creating SolrCore 'reshapeReindexing_shard2_replica_n4' using configuration from collection reshapeReindexing, trusted=true
   [junit4]   2> 371049 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36512.solr.core.reshapeReindexing.shard2.replica_n4' (registry 'solr.core.reshapeReindexing.shard2.replica_n4') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 371050 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.SolrCore [[reshapeReindexing_shard2_replica_n4] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node2/reshapeReindexing_shard2_replica_n4], dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node2/./reshapeReindexing_shard2_replica_n4/data/]
   [junit4]   2> 371085 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.s.IndexSchema [reshapeReindexing_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 371121 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 371123 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.s.IndexSchema [reshapeReindexing_shard1_replica_n2] Schema name=minimal
   [junit4]   2> 371141 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.CoreContainer Creating SolrCore 'reshapeReindexing_shard1_replica_n1' using configuration from collection reshapeReindexing, trusted=true
   [junit4]   2> 371141 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_36512.solr.core.reshapeReindexing.shard1.replica_n1' (registry 'solr.core.reshapeReindexing.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 371141 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.SolrCore [[reshapeReindexing_shard1_replica_n1] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node2/reshapeReindexing_shard1_replica_n1], dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node2/./reshapeReindexing_shard1_replica_n1/data/]
   [junit4]   2> 371178 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 371178 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.CoreContainer Creating SolrCore 'reshapeReindexing_shard1_replica_n2' using configuration from collection reshapeReindexing, trusted=true
   [junit4]   2> 371179 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37284.solr.core.reshapeReindexing.shard1.replica_n2' (registry 'solr.core.reshapeReindexing.shard1.replica_n2') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 371179 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.SolrCore [[reshapeReindexing_shard1_replica_n2] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node1/reshapeReindexing_shard1_replica_n2], dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node1/./reshapeReindexing_shard1_replica_n2/data/]
   [junit4]   2> 371355 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
   [junit4]   2> 371429 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.s.IndexSchema [reshapeReindexing_shard2_replica_n6] Schema name=minimal
   [junit4]   2> 371431 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 371431 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.CoreContainer Creating SolrCore 'reshapeReindexing_shard2_replica_n6' using configuration from collection reshapeReindexing, trusted=true
   [junit4]   2> 371444 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_37284.solr.core.reshapeReindexing.shard2.replica_n6' (registry 'solr.core.reshapeReindexing.shard2.replica_n6') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@3ef1167b
   [junit4]   2> 371445 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.SolrCore [[reshapeReindexing_shard2_replica_n6] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node1/reshapeReindexing_shard2_replica_n6], dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.ReindexCollectionTest_15B345B130871AD-001/tempDir-001/node1/./reshapeReindexing_shard2_replica_n6/data/]
   [junit4]   2> 371769 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 371769 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 371771 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 371771 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 371772 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@237a5579[reshapeReindexing_shard1_replica_n2] main]
   [junit4]   2> 371823 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 371829 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 371830 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 371830 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1643334435419979776
   [junit4]   2> 371839 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 371839 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 371841 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 371841 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 371859 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 371859 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 371859 INFO  (searcherExecutor-1144-thread-1-processing-n:127.0.0.1:37284_solr x:reshapeReindexing_shard1_replica_n2 c:reshapeReindexing s:shard1 r:core_node5) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.SolrCore [reshapeReindexing_shard1_replica_n2] Registered new searcher Searcher@237a5579[reshapeReindexing_shard1_replica_n2] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 371860 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 371860 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 371894 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@71b12893[reshapeReindexing_shard2_replica_n4] main]
   [junit4]   2> 371896 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@6079ed5b[reshapeReindexing_shard1_replica_n1] main]
   [junit4]   2> 371943 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/reshapeReindexing/terms/shard1 to Terms{values={core_node5=0}, version=0}
   [junit4]   2> 371943 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/reshapeReindexing/leaders/shard1
   [junit4]   2> 371976 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 371976 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 371976 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 371977 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 371977 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1643334435574120448
   [junit4]   2> 371979 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 371979 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 371979 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1643334435576217600
   [junit4]   2> 372044 INFO  (searcherExecutor-1143-thread-1-processing-n:127.0.0.1:36512_solr x:reshapeReindexing_shard1_replica_n1 c:reshapeReindexing s:shard1 r:core_node3) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.SolrCore [reshapeReindexing_shard1_replica_n1] Registered new searcher Searcher@6079ed5b[reshapeReindexing_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 372045 INFO  (searcherExecutor-1142-thread-1-processing-n:127.0.0.1:36512_solr x:reshapeReindexing_shard2_replica_n4 c:reshapeReindexing s:shard2 r:core_node7) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.SolrCore [reshapeReindexing_shard2_replica_n4] Registered new searcher Searcher@71b12893[reshapeReindexing_shard2_replica_n4] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 372084 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for shard shard1: total=2 found=1 timeoutin=9975ms
   [junit4]   2> 372085 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/reshapeReindexing/terms/shard1 to Terms{values={core_node3=0, core_node5=0}, version=1}
   [junit4]   2> 372085 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/reshapeReindexing/leaders/shard1
   [junit4]   2> 372131 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/reshapeReindexing/terms/shard2 to Terms{values={core_node7=0}, version=0}
   [junit4]   2> 372170 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/reshapeReindexing/leaders/shard2
   [junit4]   2> 372207 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for shard shard2: total=2 found=1 timeoutin=9982ms
   [junit4]   2> 372246 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 372246 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 372280 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 372280 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 372282 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@3fe0e69d[reshapeReindexing_shard2_replica_n6] main]
   [junit4]   2> 372317 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 372318 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 372318 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 372318 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1643334435931684864
   [junit4]   2> 372353 INFO  (searcherExecutor-1145-thread-1-processing-n:127.0.0.1:37284_solr x:reshapeReindexing_shard2_replica_n6 c:reshapeReindexing s:shard2 r:core_node8) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.SolrCore [reshapeReindexing_shard2_replica_n6] Registered new searcher Searcher@3fe0e69d[reshapeReindexing_shard2_replica_n6] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 372372 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/reshapeReindexing/terms/shard2 to Terms{values={core_node7=0, core_node8=0}, version=1}
   [junit4]   2> 372389 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/reshapeReindexing/leaders/shard2
   [junit4]   2> 372588 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 372589 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 372589 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/
   [junit4]   2> 372590 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.PeerSync PeerSync: core=reshapeReindexing_shard1_replica_n2 url=http://127.0.0.1:37284/solr START replicas=[http://127.0.0.1:36512/solr/reshapeReindexing_shard1_replica_n1/] nUpdates=100
   [junit4]   2> 372590 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.PeerSync PeerSync: core=reshapeReindexing_shard1_replica_n2 url=http://127.0.0.1:37284/solr DONE.  We have no versions.  sync failed.
   [junit4]   2> 372594 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 372611 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.SyncStrategy Leader's attempt to sync with shard failed, moving to the next candidate
   [junit4]   2> 372611 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 372611 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/reshapeReindexing/leaders/shard1/leader after winning as /collections/reshapeReindexing/leader_elect/shard1/election/75304459696537606-core_node5-n_0000000000
   [junit4]   2> 372648 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/ shard1
   [junit4]   2> 372717 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 372717 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 372717 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/
   [junit4]   2> 372718 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.PeerSync PeerSync: core=reshapeReindexing_shard2_replica_n4 url=http://127.0.0.1:36512/solr START replicas=[http://127.0.0.1:37284/solr/reshapeReindexing_shard2_replica_n6/] nUpdates=100
   [junit4]   2> 372719 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.PeerSync PeerSync: core=reshapeReindexing_shard2_replica_n4 url=http://127.0.0.1:36512/solr DONE.  We have no versions.  sync failed.
   [junit4]   2> 372730 INFO  (qtp1706701953-2437) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.S.Request [reshapeReindexing_shard2_replica_n6]  webapp=/solr path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 372732 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.SyncStrategy Leader's attempt to sync with shard failed, moving to the next candidate
   [junit4]   2> 372732 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 372732 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/reshapeReindexing/leaders/shard2/leader after winning as /collections/reshapeReindexing/leader_elect/shard2/election/75304459696537609-core_node7-n_0000000000
   [junit4]   2> 372750 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/ shard2
   [junit4]   2> 372864 INFO  (zkCallback-1020-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 372864 INFO  (zkCallback-1035-thread-2) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 372866 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 372883 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 372904 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard2_replica_n4&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard2&wt=javabin&version=2&replicaType=NRT} status=0 QTime=3354
   [junit4]   2> 372906 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&coreNodeName=core_node5&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard1_replica_n2&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=3276
   [junit4]   2> 373021 INFO  (zkCallback-1035-thread-2) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373023 INFO  (zkCallback-1020-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373028 INFO  (zkCallback-1035-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373028 INFO  (zkCallback-1020-thread-3) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373168 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&coreNodeName=core_node3&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard1_replica_n1&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=3659
   [junit4]   2> 373305 INFO  (zkCallback-1035-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373305 INFO  (zkCallback-1035-thread-2) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373305 INFO  (zkCallback-1020-thread-3) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373305 INFO  (zkCallback-1020-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373408 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&newCollection=true&name=reshapeReindexing_shard2_replica_n6&action=CREATE&numShards=2&collection=reshapeReindexing&shard=shard2&wt=javabin&version=2&replicaType=NRT} status=0 QTime=3858
   [junit4]   2> 373456 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr     ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 45 seconds. Check all shard replicas
   [junit4]   2> 373588 INFO  (zkCallback-1020-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373588 INFO  (zkCallback-1020-thread-2) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373599 INFO  (zkCallback-1035-thread-2) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373599 INFO  (zkCallback-1035-thread-1) [     ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/reshapeReindexing/state.json] for collection [reshapeReindexing] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 373600 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr     ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={collection.configName=conf1&maxShardsPerNode=-1&name=reshapeReindexing&nrtReplicas=2&action=CREATE&numShards=2&wt=javabin&version=2} status=0 QTime=4666
   [junit4]   2> 373601 INFO  (TEST-ReindexCollectionTest.testReshapeReindexing-seed#[15B345B130871AD]) [     ] o.a.s.c.MiniSolrCloudCluster waitForActiveCollection: reshapeReindexing
   [junit4]   2> 374553 INFO  (qtp1706701953-2440) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/reshapeReindexing/terms/shard1 to Terms{values={core_node3=1, core_node5=1}, version=2}
   [junit4]   2> 374645 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/update params={update.distrib=FROMLEADER&distrib.from=http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&wt=javabin&version=2}{add=[0 (1643334437381865472), 1 (1643334437402836992), 4 (1643334437457362944), 8 (1643334437470994432), 10 (1643334437472043008), 11 (1643334437472043009), 12 (1643334437472043010), 13 (1643334437473091584), 14 (1643334437473091585), 15 (1643334437473091586), ... (103 adds)]} 0 614
   [junit4]   2> 374764 INFO  (qtp1706701953-2440) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard1_replica_n2]  webapp=/solr path=/update params={wt=javabin&version=2}{add=[0 (1643334437381865472), 1 (1643334437402836992), 4 (1643334437457362944), 8 (1643334437470994432), 10 (1643334437472043008), 11 (1643334437472043009), 12 (1643334437472043010), 13 (1643334437473091584), 14 (1643334437473091585), 15 (1643334437473091586), ... (103 adds)]} 0 1095
   [junit4]   2> 374868 INFO  (qtp267509621-2428) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/reshapeReindexing/terms/shard2 to Terms{values={core_node7=1, core_node8=1}, version=2}
   [junit4]   2> 374877 INFO  (qtp1706701953-2438) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard2_replica_n6]  webapp=/solr path=/update params={update.distrib=FROMLEADER&distrib.from=http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/&wt=javabin&version=2}{add=[2 (1643334437372428288), 3 (1643334437458411520), 5 (1643334437458411521), 6 (1643334437459460096), 7 (1643334437459460097), 9 (1643334437459460098), 17 (1643334437460508672), 18 (1643334437460508673), 19 (1643334437460508674), 21 (1643334437461557248), ... (97 adds)]} 0 591
   [junit4]   2> 374878 INFO  (qtp267509621-2428) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/update params={wt=javabin&version=2}{add=[2 (1643334437372428288), 3 (1643334437458411520), 5 (1643334437458411521), 6 (1643334437459460096), 7 (1643334437459460097), 9 (1643334437459460098), 17 (1643334437460508672), 18 (1643334437460508673), 19 (1643334437460508674), 21 (1643334437461557248), ... (97 adds)]} 0 1206
   [junit4]   2> 374975 INFO  (qtp1706701953-2436) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1643334438716702720,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 374975 INFO  (qtp1706701953-2436) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@2ac33f91 commitCommandVersion:1643334438716702720
   [junit4]   2> 374997 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1643334438740819968,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 374997 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@7931e0d0 commitCommandVersion:1643334438740819968
   [junit4]   2> 375277 INFO  (qtp1706701953-2436) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@33007296[reshapeReindexing_shard1_replica_n2] main]
   [junit4]   2> 375282 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@6bd1671d[reshapeReindexing_shard1_replica_n1] main]
   [junit4]   2> 375290 INFO  (searcherExecutor-1143-thread-1-processing-n:127.0.0.1:36512_solr x:reshapeReindexing_shard1_replica_n1 c:reshapeReindexing s:shard1 r:core_node3) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.SolrCore [reshapeReindexing_shard1_replica_n1] Registered new searcher Searcher@6bd1671d[reshapeReindexing_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(9.0.0):C103:[diagnostics={java.vendor=Oracle Corporation, timestamp=1567205847623, java.version=11.0.1, java.vm.version=11.0.1+13-LTS, lucene.version=9.0.0, source=flush, os.arch=amd64, java.runtime.version=11.0.1+13-LTS, os.version=4.4.0-112-generic, os=Linux}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 375327 INFO  (searcherExecutor-1144-thread-1-processing-n:127.0.0.1:37284_solr x:reshapeReindexing_shard1_replica_n2 c:reshapeReindexing s:shard1 r:core_node5) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.SolrCore [reshapeReindexing_shard1_replica_n2] Registered new searcher Searcher@33007296[reshapeReindexing_shard1_replica_n2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(9.0.0):C103:[diagnostics={java.vendor=Oracle Corporation, timestamp=1567205847633, java.version=11.0.1, java.vm.version=11.0.1+13-LTS, lucene.version=9.0.0, source=flush, os.arch=amd64, java.runtime.version=11.0.1+13-LTS, os.version=4.4.0-112-generic, os=Linux}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 375342 INFO  (OverseerCollectionConfigSetProcessor-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may have disconnected from ZooKeeper
   [junit4]   2> 375343 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 375343 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 346
   [junit4]   2> 375363 INFO  (qtp1706701953-2436) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 375472 INFO  (qtp1706701953-2436) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard1_replica_n2]  webapp=/solr path=/update params={update.distrib=TOLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/&commit_end_point=leaders&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 498
   [junit4]   2> 375581 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1643334439353188352,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 375581 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@3acbbe28 commitCommandVersion:1643334439353188352
   [junit4]   2> 375598 INFO  (qtp1706701953-2437) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1643334439371014144,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 375674 INFO  (qtp1706701953-2437) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@66c510f7 commitCommandVersion:1643334439371014144
   [junit4]   2> 375875 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@40796f9a[reshapeReindexing_shard2_replica_n4] main]
   [junit4]   2> 375876 INFO  (searcherExecutor-1142-thread-1-processing-n:127.0.0.1:36512_solr x:reshapeReindexing_shard2_replica_n4 c:reshapeReindexing s:shard2 r:core_node7) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.SolrCore [reshapeReindexing_shard2_replica_n4] Registered new searcher Searcher@40796f9a[reshapeReindexing_shard2_replica_n4] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(9.0.0):C97:[diagnostics={java.vendor=Oracle Corporation, timestamp=1567205848288, java.version=11.0.1, java.vm.version=11.0.1+13-LTS, lucene.version=9.0.0, source=flush, os.arch=amd64, java.runtime.version=11.0.1+13-LTS, os.version=4.4.0-112-generic, os=Linux}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 375909 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 375936 INFO  (qtp1706701953-2437) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.s.SolrIndexSearcher Opening [Searcher@29553275[reshapeReindexing_shard2_replica_n6] main]
   [junit4]   2> 375952 INFO  (searcherExecutor-1145-thread-1-processing-n:127.0.0.1:37284_solr x:reshapeReindexing_shard2_replica_n6 c:reshapeReindexing s:shard2 r:core_node8) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.c.SolrCore [reshapeReindexing_shard2_replica_n6] Registered new searcher Searcher@29553275[reshapeReindexing_shard2_replica_n6] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(9.0.0):C97:[diagnostics={java.vendor=Oracle Corporation, timestamp=1567205848410, java.version=11.0.1, java.vm.version=11.0.1+13-LTS, lucene.version=9.0.0, source=flush, os.arch=amd64, java.runtime.version=11.0.1+13-LTS, os.version=4.4.0-112-generic, os=Linux}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))}
   [junit4]   2> 375952 INFO  (qtp1706701953-2437) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 375952 INFO  (qtp1706701953-2437) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard2 r:core_node8 x:reshapeReindexing_shard2_replica_n6 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard2_replica_n6]  webapp=/solr path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 354
   [junit4]   2> 375967 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.u.p.LogUpdateProcessorFactory [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/update params={_stateVer_=reshapeReindexing:8&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 1029
   [junit4]   2> 376079 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/select params={df=text&distrib=false&_stateVer_=reshapeReindexing:8&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard1_replica_n1/|http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&rows=10&version=2&q=*:*&omitHeader=false&NOW=1567205848467&isShard=true&wt=javabin} hits=103 status=0 QTime=0
   [junit4]   2> 376105 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.S.Request [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/select params={df=text&distrib=false&_stateVer_=reshapeReindexing:8&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/|http://127.0.0.1:37284/solr/reshapeReindexing_shard2_replica_n6/&rows=10&version=2&q=*:*&omitHeader=false&NOW=1567205848467&isShard=true&wt=javabin} hits=97 status=0 QTime=25
   [junit4]   2> 376210 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/select params={q=*:*&df=text&distrib=false&_stateVer_=reshapeReindexing:8&omitHeader=false&shards.purpose=64&NOW=1567205848467&ids=11,0,12,1,13,14,15,4,8,10&isShard=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard1_replica_n1/|http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 376238 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.S.Request [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/select params={q=*:*&_stateVer_=reshapeReindexing:8&wt=javabin&version=2} hits=200 status=0 QTime=267
   [junit4]   2> 376443 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.S.Request [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/select params={df=text&distrib=false&_stateVer_=reshapeReindexing:8&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/|http://127.0.0.1:37284/solr/reshapeReindexing_shard2_replica_n6/&rows=10&version=2&q=*:*&omitHeader=false&NOW=1567205848762&isShard=true&wt=javabin} hits=97 status=0 QTime=0
   [junit4]   2> 376445 INFO  (qtp267509621-2429) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/select params={df=text&distrib=false&_stateVer_=reshapeReindexing:8&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard1_replica_n1/|http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&rows=10&version=2&q=*:*&omitHeader=false&NOW=1567205848762&isShard=true&wt=javabin} hits=103 status=0 QTime=0
   [junit4]   2> 376520 INFO  (qtp267509621-2426) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/select params={q=*:*&df=text&distrib=false&_stateVer_=reshapeReindexing:8&omitHeader=false&shards.purpose=64&NOW=1567205848762&ids=11,0,12,1,13,14,15,4,8,10&isShard=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard1_replica_n1/|http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&wt=javabin&version=2} status=0 QTime=1
   [junit4]   2> 376528 INFO  (qtp1706701953-2439) [n:127.0.0.1:37284_solr c:reshapeReindexing s:shard1 r:core_node5 x:reshapeReindexing_shard1_replica_n2 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n2]  webapp=/solr path=/select params={q=*:*&_stateVer_=reshapeReindexing:8&wt=javabin&version=2} hits=200 status=0 QTime=262
   [junit4]   2> 376590 INFO  (qtp267509621-2428) [n:127.0.0.1:36512_solr     ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :reindexcollection with params replicationFactor=1&shards=foo,bar,baz&q=id:10*&fl=id,string_s&name=reshapeReindexing&router.name=implicit&action=REINDEXCOLLECTION&numShards=3&wt=javabin&version=2&target=reshapeReindexingTarget and sendToOCPQueue=true
   [junit4]   2> 376692 DEBUG (OverseerThreadFactory-1133-thread-2-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.a.c.ReindexCollectionCmd *** called: {
   [junit4]   2>   "name":"reshapeReindexing",
   [junit4]   2>   "target":"reshapeReindexingTarget",
   [junit4]   2>   "numShards":"3",
   [junit4]   2>   "replicationFactor":"1",
   [junit4]   2>   "shards":"foo,bar,baz",
   [junit4]   2>   "q":"id:10*",
   [junit4]   2>   "fl":"id,string_s",
   [junit4]   2>   "router.name":"implicit",
   [junit4]   2>   "operation":"reindexcollection"}
   [junit4]   2> 377068 INFO  (OverseerThreadFactory-1133-thread-2-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 377384 INFO  (zkConnectionManagerCallback-1064-thread-1) [     ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 377396 INFO  (OverseerThreadFactory-1133-thread-2-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 377494 INFO  (OverseerThreadFactory-1133-thread-2-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 377530 INFO  (OverseerThreadFactory-1133-thread-2-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35572/solr ready
   [junit4]   2> 377562 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.S.Request [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/select params={df=text&distrib=false&_stateVer_=reshapeReindexing:8&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard2_replica_n4/|http://127.0.0.1:37284/solr/reshapeReindexing_shard2_replica_n6/&rows=0&version=2&q=*:*&omitHeader=false&NOW=1567205850055&isShard=true&wt=javabin} hits=97 status=0 QTime=0
   [junit4]   2> 377595 INFO  (qtp267509621-2430) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard1 r:core_node3 x:reshapeReindexing_shard1_replica_n1 ] o.a.s.c.S.Request [reshapeReindexing_shard1_replica_n1]  webapp=/solr path=/select params={df=text&distrib=false&_stateVer_=reshapeReindexing:8&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://127.0.0.1:36512/solr/reshapeReindexing_shard1_replica_n1/|http://127.0.0.1:37284/solr/reshapeReindexing_shard1_replica_n2/&rows=0&version=2&q=*:*&omitHeader=false&NOW=1567205850055&isShard=true&wt=javabin} hits=103 status=0 QTime=32
   [junit4]   2> 377596 INFO  (qtp267509621-2427) [n:127.0.0.1:36512_solr c:reshapeReindexing s:shard2 r:core_node7 x:reshapeReindexing_shard2_replica_n4 ] o.a.s.c.S.Request [reshapeReindexing_shard2_replica_n4]  webapp=/solr path=/select params={q=*:*&_stateVer_=reshapeReindexing:8&rows=0&wt=javabin&version=2} hits=200 status=0 QTime=37
   [junit4]   2> 377606 INFO  (OverseerThreadFactory-1133-thread-2-processing-n:127.0.0.1:37284_solr) [n:127.0.0.1:37284_solr     ] o.a.s.c.a.c.CreateCollectionCmd Create collection reshapeReindexingTarget
   [junit4]   2> 377786 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexingTarget",
   [junit4]   2>   "shard":"foo",
   [junit4]   2>   "core":"reshapeReindexingTarget_foo_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:36512/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"true"}
   [junit4]   2> 377807 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexingTarget",
   [junit4]   2>   "shard":"foo",
   [junit4]   2>   "core":"reshapeReindexingTarget_foo_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:37284/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"true"}
   [junit4]   2> 377823 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexingTarget",
   [junit4]   2>   "shard":"bar",
   [junit4]   2>   "core":"reshapeReindexingTarget_bar_replica_n4",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:36512/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"true"}
   [junit4]   2> 377881 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexingTarget",
   [junit4]   2>   "shard":"bar",
   [junit4]   2>   "core":"reshapeReindexingTarget_bar_replica_n7",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:37284/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"true"}
   [junit4]   2> 377882 INFO  (OverseerStateUpdate-75304459696537606-127.0.0.1:37284_solr-n_0000000000) [n:127.0.0.1:37284_solr     ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"reshapeReindexingTarget",
   [junit4]   2>   "shard":"baz",
   [junit4]   2>   "core":"reshapeReindexingTarget_baz_replica_n8",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"http://127.0.0.1:36512/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"true"}

[...truncated too long message...]



-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1260885764
     [copy] Copying 249 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1260885764
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1260885764

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: #;[hidden email]
[ivy:cachepath] confs: [default]
[ivy:cachepath] found org.eclipse.jgit#org.eclipse.jgit;5.3.0.201903130848-r in public
[ivy:cachepath] found com.jcraft#jsch;0.1.54 in public
[ivy:cachepath] found com.jcraft#jzlib;1.1.1 in public
[ivy:cachepath] found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath] found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] found org.bouncycastle#bcpg-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcprov-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcpkix-jdk15on;1.60 in public
[ivy:cachepath] found org.slf4j#slf4j-nop;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 42ms :: artifacts dl 5ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   9   |   0   |   0   |   0   ||   9   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] Checking working copy status...

-jenkins-base:

BUILD SUCCESSFUL
Total time: 186 minutes 51 seconds
Archiving artifacts
java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath$ValidateAntFileMask.hasMatch(FilePath.java:2847)
        at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2726)
        at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2707)
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene2
                at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
                at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
                at hudson.remoting.Channel.call(Channel.java:955)
                at hudson.FilePath.act(FilePath.java:1072)
                at hudson.FilePath.act(FilePath.java:1061)
                at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
                at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
                at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
                at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
                at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
                at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
                at hudson.model.Build$BuildExecution.post2(Build.java:186)
                at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
                at hudson.model.Run.execute(Run.java:1835)
                at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
                at hudson.model.ResourceController.execute(ResourceController.java:97)
                at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.FilePath$TunneledInterruptedException
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3088)
        at hudson.remoting.UserRequest.perform(UserRequest.java:212)
        at hudson.remoting.UserRequest.perform(UserRequest.java:54)
        at hudson.remoting.Request$2.run(Request.java:369)
        at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused: java.lang.InterruptedException: java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath.act(FilePath.java:1074)
        at hudson.FilePath.act(FilePath.java:1061)
        at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
        at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
        at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
        at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
        at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
        at hudson.model.Build$BuildExecution.post2(Build.java:186)
        at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
        at hudson.model.Run.execute(Run.java:1835)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
No artifacts found that match the file pattern "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]
Reply | Threaded
Open this post in threaded view
|

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 462 - Still Unstable

Apache Jenkins Server-2
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/462/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestConfigSetsAPI.testUserAndTestDefaultConfigsetsAreSame

Error Message:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/configsets/_default/conf/managed-schema contents doesn't match expected (/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/server/solr/configsets/_default/conf/managed-schema) expected:<...         <tokenizer [name="whitespace"/>       </analyzer>     </fieldType>      <!-- A general text field that has reasonable, generic          cross-language defaults: it tokenizes with StandardTokenizer,         removes stop words from case-insensitive "stopwords.txt"         (empty by default), and down cases.  At query time only, it         also applies synonyms.    -->     <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <!-- in this example, we will only use synonyms at query time         <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="flattenGraph"/>         -->         <filter name="lowercase"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="lowercase"/>       </analyzer>     </fieldType>           <!-- SortableTextField generaly functions exactly like TextField,          except that it supports, and by default uses, docValues for sorting (or faceting)          on the first 1024 characters of the original field values (which is configurable).                    This makes it a bit more useful then TextField in many situations, but the trade-off          is that it takes up more space on disk; which is why it's not used in place of TextField          for every fieldType in this _default schema.    -->     <dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" multiValued="false"/>     <dynamicField name="*_txt_sort" type="text_gen_sort" indexed="true" stored="true"/>     <fieldType name="text_gen_sort" class="solr.SortableTextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="lowercase"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="lowercase"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,          removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and          finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->     <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>     <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer name="standard"/>         <!-- in this example, we will only use synonyms at query time         <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="flattenGraph"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"             />         <filter name="lowercase"/>         <filter name="englishPossessive"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter name="englishMinimalStem"/>        -->         <filter name="porterStem"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter name="lowercase"/>         <filter name="englishPossessive"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter name="englishMinimalStem"/>        -->         <filter name="porterStem"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English, plus          aggressive word-splitting and autophrase features enabled.          This field is just like text_en, except it adds          WordDelimiterGraphFilter to enable splitting and matching of          words on case-change, alpha numeric boundaries, and          non-alphanumeric chars.  This means certain compound word          cases will work, for example query "wi fi" will match          document "WiFi" or "wi-fi".     -->     <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer name="whitespace"/>         <!-- in this example, we will only use synonyms at query time         <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="porterStem"/>         <filter name="flattenGraph" />       </analyzer>       <analyzer type="query">         <tokenizer name="whitespace"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="porterStem"/>       </analyzer>     </fieldType>      <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,          but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->     <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer name="whitespace"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="englishMinimalStem"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter name="removeDuplicates"/>         <filter name="flattenGraph" />       </analyzer>       <analyzer type="query">         <tokenizer name="whitespace"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="englishMinimalStem"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter name="removeDuplicates"/>       </analyzer>     </fieldType>      <!-- Just like text_general except it reverses the characters of         each token, to enable more efficient leading wildcard queries.     -->     <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>     <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="lowercase"/>         <filter name="reversedWildcard" withOriginal="true"                 maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="lowercase"/>       </analyzer>     </fieldType>      <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>     <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >       <analyzer>         <tokenizer name="standard"/>         <filter name="doubleMetaphone" inject="false"/>       </analyzer>     </fieldType>      <!-- lowercases the entire field value, keeping it as a single token.  -->     <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>     <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="keyword"/>         <filter name="lowercase" />       </analyzer>     </fieldType>      <!--        Example of using PathHierarchyTokenizerFactory at index time, so       queries for paths match documents at that path, or in descendent paths     -->     <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>     <fieldType name="descendent_path" class="solr.TextField">       <analyzer type="index">         <tokenizer name="pathHierarchy" delimiter="/" />       </analyzer>       <analyzer type="query">         <tokenizer name="keyword" />       </analyzer>     </fieldType>      <!--       Example of using PathHierarchyTokenizerFactory at query time, so       queries for paths match documents at that path, or in ancestor paths     -->     <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>     <fieldType name="ancestor_path" class="solr.TextField">       <analyzer type="index">         <tokenizer name="keyword" />       </analyzer>       <analyzer type="query">         <tokenizer name="pathHierarchy" delimiter="/" />       </analyzer>     </fieldType>      <!-- This point type indexes the coordinates as separate fields (subFields)       If subFieldType is defined, it references a type, and a dynamic field       definition is created matching *___<typename>.  Alternately, if        subFieldSuffix is defined, that is used to create the subFields.       Example: if subFieldType="double", then the coordinates would be         indexed in fields myloc_0___double,myloc_1___double.       Example: if subFieldSuffix="_d" then the coordinates would be indexed         in fields myloc_0_d,myloc_1_d       The subFields are an implementation detail of the fieldType, and end       users normally should not need to know about them.      -->     <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>     <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>      <!-- A specialized field for geospatial search filters and distance sorting. -->     <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>      <!-- A geospatial field type that supports multiValued and polygon shapes.       For more information about this and other spatial fields see:       http://lucene.apache.org/solr/guide/spatial-search.html     -->     <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"                geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />      <!-- Payloaded field types -->     <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer name="whitespace"/>         <filter name="delimitedPayload" encoder="float"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer name="whitespace"/>         <filter name="delimitedPayload" encoder="integer"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer name="whitespace"/>         <filter name="delimitedPayload" encoder="identity"/>       </analyzer>     </fieldType>      <!-- some examples for different languages (generally ordered by ISO code) -->      <!-- Arabic -->     <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>     <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- for any non-arabic -->         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />         <!-- normalizes ﻯ to ﻱ, etc -->         <filter name="arabicNormalization"/>         <filter name="arabicStem"/>       </analyzer>     </fieldType>      <!-- Bulgarian -->     <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>     <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" />         <filter name="bulgarianStem"/>       </analyzer>     </fieldType>          <!-- Catalan -->     <dynamicField name="*_txt_ca" type="text_ca"  indexed="true"  stored="true"/>     <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes l', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_ca.txt"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" />         <filter name="snowballPorter" language="Catalan"/>       </analyzer>     </fieldType>          <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->     <dynamicField name="*_txt_cjk" type="text_cjk"  indexed="true"  stored="true"/>     <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="standard"/>         <!-- normalize width before bigram, as e.g. half-width dakuten combine  -->         <filter name="CJKWidth"/>         <!-- for any non-CJK -->         <filter name="lowercase"/>         <filter name="CJKBigram"/>       </analyzer>     </fieldType>      <!-- Czech -->     <dynamicField name="*_txt_cz" type="text_cz"  indexed="true"  stored="true"/>     <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_cz.txt" />         <filter name="czechStem"/>       </analyzer>     </fieldType>          <!-- Danish -->     <dynamicField name="*_txt_da" type="text_da"  indexed="true"  stored="true"/>     <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />         <filter name="snowballPorter" language="Danish"/>       </analyzer>     </fieldType>          <!-- German -->     <dynamicField name="*_txt_de" type="text_de"  indexed="true"  stored="true"/>     <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />         <filter name="germanNormalization"/>         <filter name="germanLightStem"/>         <!-- less aggressive: <filter name="germanMinimalStem"/> -->         <!-- more aggressive: <filter name="snowballPorter" language="German2"/> -->       </analyzer>     </fieldType>          <!-- Greek -->     <dynamicField name="*_txt_el" type="text_el"  indexed="true"  stored="true"/>     <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- greek specific lowercase for sigma -->         <filter name="greekLowercase"/>         <filter name="stop" ignoreCase="false" words="lang/stopwords_el.txt" />         <filter name="greekStem"/>       </analyzer>     </fieldType>          <!-- Spanish -->     <dynamicField name="*_txt_es" type="text_es"  indexed="true"  stored="true"/>     <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />         <filter name="spanishLightStem"/>         <!-- more aggressive: <filter name="snowballPorter" language="Spanish"/> -->       </analyzer>     </fieldType>      <!-- Estonian -->     <dynamicField name="*_txt_et" type="text_et"  indexed="true"  stored="true"/>     <fieldType name="text_et" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_et.txt" />         <filter name="snowballPorter" language="Estonian"/>       </analyzer>     </fieldType>      <!-- Basque -->     <dynamicField name="*_txt_eu" type="text_eu"  indexed="true"  stored="true"/>     <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_eu.txt" />         <filter name="snowballPorter" language="Basque"/>       </analyzer>     </fieldType>          <!-- Persian -->     <dynamicField name="*_txt_fa" type="text_fa"  indexed="true"  stored="true"/>     <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- for ZWNJ -->         <charFilter name="persian"/>         <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="arabicNormalization"/>         <filter name="persianNormalization"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_fa.txt" />       </analyzer>     </fieldType>          <!-- Finnish -->     <dynamicField name="*_txt_fi" type="text_fi"  indexed="true"  stored="true"/>     <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />         <filter name="snowballPorter" language="Finnish"/>         <!-- less aggressive: <filter name="finnishLightStem"/> -->       </analyzer>     </fieldType>          <!-- French -->     <dynamicField name="*_txt_fr" type="text_fr"  indexed="true"  stored="true"/>     <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes l', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_fr.txt"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />         <filter name="frenchLightStem"/>         <!-- less aggressive: <filter name="frenchMinimalStem"/> -->         <!-- more aggressive: <filter name="snowballPorter" language="French"/> -->       </analyzer>     </fieldType>          <!-- Irish -->     <dynamicField name="*_txt_ga" type="text_ga"  indexed="true"  stored="true"/>     <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes d', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_ga.txt"/>         <!-- removes n-, etc. position increments is intentionally false! -->         <filter name="stop" ignoreCase="true" words="lang/hyphenations_ga.txt"/>         <filter name="irishLowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ga.txt"/>         <filter name="snowballPorter" language="Irish"/>       </analyzer>     </fieldType>          <!-- Galician -->     <dynamicField name="*_txt_gl" type="text_gl"  indexed="true"  stored="true"/>     <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_gl.txt" />         <filter name="galicianStem"/>         <!-- less aggressive: <filter name="galicianMinimalStem"/> -->       </analyzer>     </fieldType>          <!-- Hindi -->     <dynamicField name="*_txt_hi" type="text_hi"  indexed="true"  stored="true"/>     <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <!-- normalizes unicode representation -->         <filter name="indicNormalization"/>         <!-- normalizes variation in spelling -->         <filter name="hindiNormalization"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_hi.txt" />         <filter name="hindiStem"/>       </analyzer>     </fieldType>          <!-- Hungarian -->     <dynamicField name="*_txt_hu" type="text_hu"  indexed="true"  stored="true"/>     <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />         <filter name="snowballPorter" language="Hungarian"/>         <!-- less aggressive: <filter name="hungarianLightStem"/> -->       </analyzer>     </fieldType>          <!-- Armenian -->     <dynamicField name="*_txt_hy" type="text_hy"  indexed="true"  stored="true"/>     <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_hy.txt" />         <filter name="snowballPorter" language="Armenian"/>       </analyzer>     </fieldType>          <!-- Indonesian -->     <dynamicField name="*_txt_id" type="text_id"  indexed="true"  stored="true"/>     <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_id.txt" />         <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->         <filter name="indonesianStem" stemDerivational="true"/>       </analyzer>     </fieldType>          <!-- Italian -->   <dynamicField name="*_txt_it" type="text_it"  indexed="true"  stored="true"/>   <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes l', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_it.txt"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />         <filter name="italianLightStem"/>         <!-- more aggressive: <filter name="snowballPorter" language="Italian"/> -->       </analyzer>     </fieldType>          <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)           NOTE: If you want to optimize search for precision, use default operator AND in your request          handler config (q.op) Use OR if you would like to optimize for recall (default).     -->     <dynamicField name="*_txt_ja" type="text_ja"  indexed="true"  stored="true"/>     <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">       <analyzer>         <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)             Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic            is used to segment compounds into its parts and the compound itself is kept as synonym.             Valid values for attribute mode are:               normal: regular segmentation               search: segmentation useful for search with synonyms compounds (default)             extended: same as search mode, but unigrams unknown words (experimental)             For some applications it might be good to use search mode for indexing and normal mode for            queries to reduce recall and prevent parts of compounds from being matched and highlighted.            Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.             Kuromoji also has a convenient user dictionary feature that allows overriding the statistical            model with your own entries for segmentation, part-of-speech tags and readings without a need            to specify weights.  Notice that user dictionaries have not been subject to extensive testing.             User dictionary attributes are:                      userDictionary: user dictionary filename              userDictionaryEncoding: user dictionary encoding (default is UTF-8)             See lang/userdict_ja.txt for a sample user dictionary file.             Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.         -->         <tokenizer name="japanese" mode="search"/>         <!--<tokenizer name="japanese" mode="search" userDictionary="lang/userdict_ja.txt"/>-->         <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->         <filter name="japaneseBaseForm"/>         <!-- Removes tokens with certain part-of-speech tags -->         <filter name="japanesePartOfSpeechStop" tags="lang/stoptags_ja.txt" />         <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->         <filter name="cjkWidth"/>         <!-- Removes common tokens typically not useful for search, but have a negative effect on ranking -->         <filter name="stop" ignoreCase="true" words="lang/stopwords_ja.txt" />         <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->         <filter name="japaneseKatakanaStem" minimumLength="4"/>         <!-- Lower-cases romaji characters -->         <filter name="lowercase"/>       </analyzer>     </fieldType>          <!-- Korean morphological analysis -->     <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>     <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)           The Korean (nori) analyzer integrates Lucene nori analysis module into Solr.           It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.            This dictionary was built with MeCab, it defines a format for the features adapted           for the Korean language.                      Nori also has a convenient user dictionary feature that allows overriding the statistical           model with your own entries for segmentation, part-of-speech tags and readings without a need           to specify weights. Notice that user dictionaries have not been subject to extensive testing.            The tokenizer supports multiple schema attributes:             * userDictionary: User dictionary path.             * userDictionaryEncoding: User dictionary encoding.             * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'.             * outputUnknownUnigrams: If true outputs unigrams for unknown words.         -->         <tokenizer name="korean" decompoundMode="discard" outputUnknownUnigrams="false"/>         <!-- Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags',           listing the tags to remove. By default it removes:            E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV           This is basically an equivalent to stemming.         -->         <filter name="koreanPartOfSpeechStop" />         <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->         <filter name="koreanReadingForm" />         <filter name="lowercase" />       </analyzer>     </fieldType>      <!-- Latvian -->     <dynamicField name="*_txt_lv" type="text_lv"  indexed="true"  stored="true"/>     <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_lv.txt" />         <filter name="latvianStem"/>       </analyzer>     </fieldType>          <!-- Dutch -->     <dynamicField name="*_txt_nl" type="text_nl"  indexed="true"  stored="true"/>     <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />         <filter name="stemmerOverride" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>         <filter name="snowballPorter" language="Dutch"/>       </analyzer>     </fieldType>          <!-- Norwegian -->     <dynamicField name="*_txt_no" type="text_no"  indexed="true"  stored="true"/>     <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />         <filter name="snowballPorter" language="Norwegian"/>         <!-- less aggressive: <filter name="norwegianLightStem"/> -->         <!-- singular/plural: <filter name="norwegianMinimalStem"/> -->       </analyzer>     </fieldType>          <!-- Portuguese -->   <dynamicField name="*_txt_pt" type="text_pt"  indexed="true"  stored="true"/>   <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />         <filter name="portugueseLightStem"/>         <!-- less aggressive: <filter name="portugueseMinimalStem"/> -->         <!-- more aggressive: <filter name="snowballPorter" language="Portuguese"/> -->         <!-- most aggressive: <filter name="portugueseStem"/> -->       </analyzer>     </fieldType>          <!-- Romanian -->     <dynamicField name="*_txt_ro" type="text_ro"  indexed="true"  stored="true"/>     <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ro.txt" />         <filter name="snowballPorter" language="Romanian"/>       </analyzer>     </fieldType>          <!-- Russian -->     <dynamicField name="*_txt_ru" type="text_ru"  indexed="true"  stored="true"/>     <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />         <filter name="snowballPorter" language="Russian"/>         <!-- less aggressive: <filter name="russianLightStem"/> -->       </analyzer>     </fieldType>          <!-- Swedish -->     <dynamicField name="*_txt_sv" type="text_sv"  indexed="true"  stored="true"/>     <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />         <filter name="snowballPorter" language="Swedish"/>         <!-- less aggressive: <filter name="swedishLightStem"/> -->       </analyzer>     </fieldType>          <!-- Thai -->     <dynamicField name="*_txt_th" type="text_th"  indexed="true"  stored="true"/>     <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="thai"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_th.txt" />       </analyzer>     </fieldType>          <!-- Turkish -->     <dynamicField name="*_txt_tr" type="text_tr"  indexed="true"  stored="true"/>     <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="turkishLowercase"/>         <filter name="stop" ignoreCase="false" words="lang/stopwords_tr.txt" />         <filter name="snowballPorter]" language="Turkish"...> but was:<...         <tokenizer [class="solr.WhitespaceTokenizerFactory"/>       </analyzer>     </fieldType>      <!-- A general text field that has reasonable, generic          cross-language defaults: it tokenizes with StandardTokenizer,         removes stop words from case-insensitive "stopwords.txt"         (empty by default), and down cases.  At query time only, it         also applies synonyms.    -->     <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <!-- in this example, we will only use synonyms at query time         <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.FlattenGraphFilterFactory"/>         -->         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>           <!-- SortableTextField generaly functions exactly like TextField,          except that it supports, and by default uses, docValues for sorting (or faceting)          on the first 1024 characters of the original field values (which is configurable).                    This makes it a bit more useful then TextField in many situations, but the trade-off          is that it takes up more space on disk; which is why it's not used in place of TextField          for every fieldType in this _default schema.    -->     <dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" multiValued="false"/>     <dynamicField name="*_txt_sort" type="text_gen_sort" indexed="true" stored="true"/>     <fieldType name="text_gen_sort" class="solr.SortableTextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,          removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and          finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->     <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>     <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- in this example, we will only use synonyms at query time         <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.FlattenGraphFilterFactory"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"             />         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.EnglishPossessiveFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter class="solr.EnglishMinimalStemFilterFactory"/>        -->         <filter class="solr.PorterStemFilterFactory"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.EnglishPossessiveFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter class="solr.EnglishMinimalStemFilterFactory"/>        -->         <filter class="solr.PorterStemFilterFactory"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English, plus          aggressive word-splitting and autophrase features enabled.          This field is just like text_en, except it adds          WordDelimiterGraphFilter to enable splitting and matching of          words on case-change, alpha numeric boundaries, and          non-alphanumeric chars.  This means certain compound word          cases will work, for example query "wi fi" will match          document "WiFi" or "wi-fi".     -->     <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <!-- in this example, we will only use synonyms at query time         <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.PorterStemFilterFactory"/>         <filter class="solr.FlattenGraphFilterFactory" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.PorterStemFilterFactory"/>       </analyzer>     </fieldType>      <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,          but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->     <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.EnglishMinimalStemFilterFactory"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>         <filter class="solr.FlattenGraphFilterFactory" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.EnglishMinimalStemFilterFactory"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>       </analyzer>     </fieldType>      <!-- Just like text_general except it reverses the characters of         each token, to enable more efficient leading wildcard queries.     -->     <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>     <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"                 maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>      <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>     <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >       <analyzer>         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>       </analyzer>     </fieldType>      <!-- lowercases the entire field value, keeping it as a single token.  -->     <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>     <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.KeywordTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory" />       </analyzer>     </fieldType>      <!--        Example of using PathHierarchyTokenizerFactory at index time, so       queries for paths match documents at that path, or in descendent paths     -->     <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>     <fieldType name="descendent_path" class="solr.TextField">       <analyzer type="index">         <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.KeywordTokenizerFactory" />       </analyzer>     </fieldType>      <!--       Example of using PathHierarchyTokenizerFactory at query time, so       queries for paths match documents at that path, or in ancestor paths     -->     <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>     <fieldType name="ancestor_path" class="solr.TextField">       <analyzer type="index">         <tokenizer class="solr.KeywordTokenizerFactory" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />       </analyzer>     </fieldType>      <!-- This point type indexes the coordinates as separate fields (subFields)       If subFieldType is defined, it references a type, and a dynamic field       definition is created matching *___<typename>.  Alternately, if        subFieldSuffix is defined, that is used to create the subFields.       Example: if subFieldType="double", then the coordinates would be         indexed in fields myloc_0___double,myloc_1___double.       Example: if subFieldSuffix="_d" then the coordinates would be indexed         in fields myloc_0_d,myloc_1_d       The subFields are an implementation detail of the fieldType, and end       users normally should not need to know about them.      -->     <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>     <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>      <!-- A specialized field for geospatial search filters and distance sorting. -->     <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>      <!-- A geospatial field type that supports multiValued and polygon shapes.       For more information about this and other spatial fields see:       http://lucene.apache.org/solr/guide/spatial-search.html     -->     <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"                geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />      <!-- Payloaded field types -->     <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="identity"/>       </analyzer>     </fieldType>      <!-- some examples for different languages (generally ordered by ISO code) -->      <!-- Arabic -->     <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>     <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- for any non-arabic -->         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ar.txt" />         <!-- normalizes ﻯ to ﻱ, etc -->         <filter class="solr.ArabicNormalizationFilterFactory"/>         <filter class="solr.ArabicStemFilterFactory"/>       </analyzer>     </fieldType>      <!-- Bulgarian -->     <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>     <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>          <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_bg.txt" />          <filter class="solr.BulgarianStemFilterFactory"/>              </analyzer>     </fieldType>          <!-- Catalan -->     <dynamicField name="*_txt_ca" type="text_ca"  indexed="true"  stored="true"/>     <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes l', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ca.txt"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ca.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Catalan"/>              </analyzer>     </fieldType>          <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->     <dynamicField name="*_txt_cjk" type="text_cjk"  indexed="true"  stored="true"/>     <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- normalize width before bigram, as e.g. half-width dakuten combine  -->         <filter class="solr.CJKWidthFilterFactory"/>         <!-- for any non-CJK -->         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.CJKBigramFilterFactory"/>       </analyzer>     </fieldType>      <!-- Czech -->     <dynamicField name="*_txt_cz" type="text_cz"  indexed="true"  stored="true"/>     <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_cz.txt" />         <filter class="solr.CzechStemFilterFactory"/>              </analyzer>     </fieldType>          <!-- Danish -->     <dynamicField name="*_txt_da" type="text_da"  indexed="true"  stored="true"/>     <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Danish"/>              </analyzer>     </fieldType>          <!-- German -->     <dynamicField name="*_txt_de" type="text_de"  indexed="true"  stored="true"/>     <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />         <filter class="solr.GermanNormalizationFilterFactory"/>         <filter class="solr.GermanLightStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.GermanMinimalStemFilterFactory"/> -->         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="German2"/> -->       </analyzer>     </fieldType>          <!-- Greek -->     <dynamicField name="*_txt_el" type="text_el"  indexed="true"  stored="true"/>     <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- greek specific lowercase for sigma -->         <filter class="solr.GreekLowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_el.txt" />         <filter class="solr.GreekStemFilterFactory"/>       </analyzer>     </fieldType>          <!-- Spanish -->     <dynamicField name="*_txt_es" type="text_es"  indexed="true"  stored="true"/>     <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />         <filter class="solr.SpanishLightStemFilterFactory"/>         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Spanish"/> -->       </analyzer>     </fieldType>      <!-- Estonian -->     <dynamicField name="*_txt_et" type="text_et"  indexed="true"  stored="true"/>     <fieldType name="text_et" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_et.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Estonian"/>       </analyzer>     </fieldType>      <!-- Basque -->     <dynamicField name="*_txt_eu" type="text_eu"  indexed="true"  stored="true"/>     <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_eu.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Basque"/>       </analyzer>     </fieldType>          <!-- Persian -->     <dynamicField name="*_txt_fa" type="text_fa"  indexed="true"  stored="true"/>     <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- for ZWNJ -->         <charFilter class="solr.PersianCharFilterFactory"/>         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.ArabicNormalizationFilterFactory"/>         <filter class="solr.PersianNormalizationFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fa.txt" />       </analyzer>     </fieldType>          <!-- Finnish -->     <dynamicField name="*_txt_fi" type="text_fi"  indexed="true"  stored="true"/>     <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Finnish"/>         <!-- less aggressive: <filter class="solr.FinnishLightStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- French -->     <dynamicField name="*_txt_fr" type="text_fr"  indexed="true"  stored="true"/>     <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes l', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />         <filter class="solr.FrenchLightStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.FrenchMinimalStemFilterFactory"/> -->         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="French"/> -->       </analyzer>     </fieldType>          <!-- Irish -->     <dynamicField name="*_txt_ga" type="text_ga"  indexed="true"  stored="true"/>     <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes d', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ga.txt"/>         <!-- removes n-, etc. position increments is intentionally false! -->         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/hyphenations_ga.txt"/>         <filter class="solr.IrishLowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ga.txt"/>         <filter class="solr.SnowballPorterFilterFactory" language="Irish"/>       </analyzer>     </fieldType>          <!-- Galician -->     <dynamicField name="*_txt_gl" type="text_gl"  indexed="true"  stored="true"/>     <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_gl.txt" />         <filter class="solr.GalicianStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.GalicianMinimalStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Hindi -->     <dynamicField name="*_txt_hi" type="text_hi"  indexed="true"  stored="true"/>     <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <!-- normalizes unicode representation -->         <filter class="solr.IndicNormalizationFilterFactory"/>         <!-- normalizes variation in spelling -->         <filter class="solr.HindiNormalizationFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt" />         <filter class="solr.HindiStemFilterFactory"/>       </analyzer>     </fieldType>          <!-- Hungarian -->     <dynamicField name="*_txt_hu" type="text_hu"  indexed="true"  stored="true"/>     <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/>         <!-- less aggressive: <filter class="solr.HungarianLightStemFilterFactory"/> -->          </analyzer>     </fieldType>          <!-- Armenian -->     <dynamicField name="*_txt_hy" type="text_hy"  indexed="true"  stored="true"/>     <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Armenian"/>       </analyzer>     </fieldType>          <!-- Indonesian -->     <dynamicField name="*_txt_id" type="text_id"  indexed="true"  stored="true"/>     <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_id.txt" />         <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->         <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true"/>       </analyzer>     </fieldType>          <!-- Italian -->   <dynamicField name="*_txt_it" type="text_it"  indexed="true"  stored="true"/>   <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes l', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_it.txt"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />         <filter class="solr.ItalianLightStemFilterFactory"/>         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Italian"/> -->       </analyzer>     </fieldType>          <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)           NOTE: If you want to optimize search for precision, use default operator AND in your request          handler config (q.op) Use OR if you would like to optimize for recall (default).     -->     <dynamicField name="*_txt_ja" type="text_ja"  indexed="true"  stored="true"/>     <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">       <analyzer>         <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)             Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic            is used to segment compounds into its parts and the compound itself is kept as synonym.             Valid values for attribute mode are:               normal: regular segmentation               search: segmentation useful for search with synonyms compounds (default)             extended: same as search mode, but unigrams unknown words (experimental)             For some applications it might be good to use search mode for indexing and normal mode for            queries to reduce recall and prevent parts of compounds from being matched and highlighted.            Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.             Kuromoji also has a convenient user dictionary feature that allows overriding the statistical            model with your own entries for segmentation, part-of-speech tags and readings without a need            to specify weights.  Notice that user dictionaries have not been subject to extensive testing.             User dictionary attributes are:                      userDictionary: user dictionary filename              userDictionaryEncoding: user dictionary encoding (default is UTF-8)             See lang/userdict_ja.txt for a sample user dictionary file.             Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.         -->         <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>         <!--<tokenizer class="solr.JapaneseTokenizerFactory" mode="search" userDictionary="lang/userdict_ja.txt"/>-->         <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->         <filter class="solr.JapaneseBaseFormFilterFactory"/>         <!-- Removes tokens with certain part-of-speech tags -->         <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt" />         <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->         <filter class="solr.CJKWidthFilterFactory"/>         <!-- Removes common tokens typically not useful for search, but have a negative effect on ranking -->         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt" />         <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->         <filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/>         <!-- Lower-cases romaji characters -->         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>          <!-- Korean morphological analysis -->     <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>     <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)           The Korean (nori) analyzer integrates Lucene nori analysis module into Solr.           It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.            This dictionary was built with MeCab, it defines a format for the features adapted           for the Korean language.                      Nori also has a convenient user dictionary feature that allows overriding the statistical           model with your own entries for segmentation, part-of-speech tags and readings without a need           to specify weights. Notice that user dictionaries have not been subject to extensive testing.            The tokenizer supports multiple schema attributes:             * userDictionary: User dictionary path.             * userDictionaryEncoding: User dictionary encoding.             * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'.             * outputUnknownUnigrams: If true outputs unigrams for unknown words.         -->         <tokenizer class="solr.KoreanTokenizerFactory" decompoundMode="discard" outputUnknownUnigrams="false"/>         <!-- Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags',           listing the tags to remove. By default it removes:            E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV           This is basically an equivalent to stemming.         -->         <filter class="solr.KoreanPartOfSpeechStopFilterFactory" />         <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->         <filter class="solr.KoreanReadingFormFilterFactory" />         <filter class="solr.LowerCaseFilterFactory" />       </analyzer>     </fieldType>      <!-- Latvian -->     <dynamicField name="*_txt_lv" type="text_lv"  indexed="true"  stored="true"/>     <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_lv.txt" />         <filter class="solr.LatvianStemFilterFactory"/>       </analyzer>     </fieldType>          <!-- Dutch -->     <dynamicField name="*_txt_nl" type="text_nl"  indexed="true"  stored="true"/>     <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />         <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>         <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>       </analyzer>     </fieldType>          <!-- Norwegian -->     <dynamicField name="*_txt_no" type="text_no"  indexed="true"  stored="true"/>     <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Norwegian"/>         <!-- less aggressive: <filter class="solr.NorwegianLightStemFilterFactory"/> -->         <!-- singular/plural: <filter class="solr.NorwegianMinimalStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Portuguese -->   <dynamicField name="*_txt_pt" type="text_pt"  indexed="true"  stored="true"/>   <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />         <filter class="solr.PortugueseLightStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.PortugueseMinimalStemFilterFactory"/> -->         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Portuguese"/> -->         <!-- most aggressive: <filter class="solr.PortugueseStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Romanian -->     <dynamicField name="*_txt_ro" type="text_ro"  indexed="true"  stored="true"/>     <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ro.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Romanian"/>       </analyzer>     </fieldType>          <!-- Russian -->     <dynamicField name="*_txt_ru" type="text_ru"  indexed="true"  stored="true"/>     <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Russian"/>         <!-- less aggressive: <filter class="solr.RussianLightStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Swedish -->     <dynamicField name="*_txt_sv" type="text_sv"  indexed="true"  stored="true"/>     <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/>         <!-- less aggressive: <filter class="solr.SwedishLightStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Thai -->     <dynamicField name="*_txt_th" type="text_th"  indexed="true"  stored="true"/>     <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.ThaiTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_th.txt" />       </analyzer>     </fieldType>          <!-- Turkish -->     <dynamicField name="*_txt_tr" type="text_tr"  indexed="true"  stored="true"/>     <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.TurkishLowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_tr.txt" />         <filter class="solr.SnowballPorterFilterFactory]" language="Turkish"...>

Stack Trace:
org.junit.ComparisonFailure: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/configsets/_default/conf/managed-schema contents doesn't match expected (/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/server/solr/configsets/_default/conf/managed-schema) expected:<...
        <tokenizer [name="whitespace"/>
      </analyzer>
    </fieldType>

    <!-- A general text field that has reasonable, generic
         cross-language defaults: it tokenizes with StandardTokenizer,
               removes stop words from case-insensitive "stopwords.txt"
               (empty by default), and down cases.  At query time only, it
               also applies synonyms.
          -->
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <!-- in this example, we will only use synonyms at query time
        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="flattenGraph"/>
        -->
        <filter name="lowercase"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="lowercase"/>
      </analyzer>
    </fieldType>

   
    <!-- SortableTextField generaly functions exactly like TextField,
         except that it supports, and by default uses, docValues for sorting (or faceting)
         on the first 1024 characters of the original field values (which is configurable).
         
         This makes it a bit more useful then TextField in many situations, but the trade-off
         is that it takes up more space on disk; which is why it's not used in place of TextField
         for every fieldType in this _default schema.
          -->
    <dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" multiValued="false"/>
    <dynamicField name="*_txt_sort" type="text_gen_sort" indexed="true" stored="true"/>
    <fieldType name="text_gen_sort" class="solr.SortableTextField" positionIncrementGap="100" multiValued="true">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="lowercase"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="lowercase"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,
         removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
         finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->
    <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>
    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <!-- in this example, we will only use synonyms at query time
        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="flattenGraph"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
            />
        <filter name="lowercase"/>
        <filter name="englishPossessive"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
        <filter name="englishMinimalStem"/>
              -->
        <filter name="porterStem"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
        />
        <filter name="lowercase"/>
        <filter name="englishPossessive"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
        <filter name="englishMinimalStem"/>
              -->
        <filter name="porterStem"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English, plus
         aggressive word-splitting and autophrase features enabled.
         This field is just like text_en, except it adds
         WordDelimiterGraphFilter to enable splitting and matching of
         words on case-change, alpha numeric boundaries, and
         non-alphanumeric chars.  This means certain compound word
         cases will work, for example query "wi fi" will match
         document "WiFi" or "wi-fi".
    -->
    <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>
    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer type="index">
        <tokenizer name="whitespace"/>
        <!-- in this example, we will only use synonyms at query time
        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
        />
        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="porterStem"/>
        <filter name="flattenGraph" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="whitespace"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
        />
        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="porterStem"/>
      </analyzer>
    </fieldType>

    <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
    <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>
    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer type="index">
        <tokenizer name="whitespace"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="englishMinimalStem"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterGraphFilter in conjuncton with stemming. -->
        <filter name="removeDuplicates"/>
        <filter name="flattenGraph" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="whitespace"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="englishMinimalStem"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterGraphFilter in conjuncton with stemming. -->
        <filter name="removeDuplicates"/>
      </analyzer>
    </fieldType>

    <!-- Just like text_general except it reverses the characters of
               each token, to enable more efficient leading wildcard queries.
    -->
    <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>
    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="lowercase"/>
        <filter name="reversedWildcard" withOriginal="true"
                maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="lowercase"/>
      </analyzer>
    </fieldType>

    <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>
    <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >
      <analyzer>
        <tokenizer name="standard"/>
        <filter name="doubleMetaphone" inject="false"/>
      </analyzer>
    </fieldType>

    <!-- lowercases the entire field value, keeping it as a single token.  -->
    <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>
    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer name="keyword"/>
        <filter name="lowercase" />
      </analyzer>
    </fieldType>

    <!--
      Example of using PathHierarchyTokenizerFactory at index time, so
      queries for paths match documents at that path, or in descendent paths
    -->
    <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>
    <fieldType name="descendent_path" class="solr.TextField">
      <analyzer type="index">
        <tokenizer name="pathHierarchy" delimiter="/" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="keyword" />
      </analyzer>
    </fieldType>

    <!--
      Example of using PathHierarchyTokenizerFactory at query time, so
      queries for paths match documents at that path, or in ancestor paths
    -->
    <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>
    <fieldType name="ancestor_path" class="solr.TextField">
      <analyzer type="index">
        <tokenizer name="keyword" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="pathHierarchy" delimiter="/" />
      </analyzer>
    </fieldType>

    <!-- This point type indexes the coordinates as separate fields (subFields)
      If subFieldType is defined, it references a type, and a dynamic field
      definition is created matching *___<typename>.  Alternately, if
      subFieldSuffix is defined, that is used to create the subFields.
      Example: if subFieldType="double", then the coordinates would be
        indexed in fields myloc_0___double,myloc_1___double.
      Example: if subFieldSuffix="_d" then the coordinates would be indexed
        in fields myloc_0_d,myloc_1_d
      The subFields are an implementation detail of the fieldType, and end
      users normally should not need to know about them.
     -->
    <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>
    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>

    <!-- A specialized field for geospatial search filters and distance sorting. -->
    <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>

    <!-- A geospatial field type that supports multiValued and polygon shapes.
      For more information about this and other spatial fields see:
      http://lucene.apache.org/solr/guide/spatial-search.html
    -->
    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
               geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />

    <!-- Payloaded field types -->
    <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">
      <analyzer>
        <tokenizer name="whitespace"/>
        <filter name="delimitedPayload" encoder="float"/>
      </analyzer>
    </fieldType>
    <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">
      <analyzer>
        <tokenizer name="whitespace"/>
        <filter name="delimitedPayload" encoder="integer"/>
      </analyzer>
    </fieldType>
    <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">
      <analyzer>
        <tokenizer name="whitespace"/>
        <filter name="delimitedPayload" encoder="identity"/>
      </analyzer>
    </fieldType>

    <!-- some examples for different languages (generally ordered by ISO code) -->

    <!-- Arabic -->
    <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>
    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer name="standard"/>
        <!-- for any non-arabic -->
        <filter name="lowercase"/>
        <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />
        <!-- normalizes ﻯ to ﻱ, etc -->
        <filter name="arabicNormalization"/>
        <filter name="arabicStem"/>
      </analyzer>
    </fieldType>

    <!-- Bulgarian -->
    <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>
    <fieldType name="text_bg" class="solr.TextField" positionIncrement

[...truncated too long message...]


-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1690707918
     [copy] Copying 249 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1690707918
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1690707918

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: #;[hidden email]
[ivy:cachepath] confs: [default]
[ivy:cachepath] found org.eclipse.jgit#org.eclipse.jgit;5.3.0.201903130848-r in public
[ivy:cachepath] found com.jcraft#jsch;0.1.54 in public
[ivy:cachepath] found com.jcraft#jzlib;1.1.1 in public
[ivy:cachepath] found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath] found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] found org.bouncycastle#bcpg-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcprov-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcpkix-jdk15on;1.60 in public
[ivy:cachepath] found org.slf4j#slf4j-nop;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 77ms :: artifacts dl 13ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   9   |   0   |   0   |   0   ||   9   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] Checking working copy status...

-jenkins-base:

BUILD SUCCESSFUL
Total time: 186 minutes 35 seconds
Archiving artifacts
java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath$ValidateAntFileMask.hasMatch(FilePath.java:2847)
        at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2726)
        at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2707)
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene2
                at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
                at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
                at hudson.remoting.Channel.call(Channel.java:955)
                at hudson.FilePath.act(FilePath.java:1072)
                at hudson.FilePath.act(FilePath.java:1061)
                at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
                at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
                at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
                at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
                at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
                at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
                at hudson.model.Build$BuildExecution.post2(Build.java:186)
                at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
                at hudson.model.Run.execute(Run.java:1835)
                at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
                at hudson.model.ResourceController.execute(ResourceController.java:97)
                at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.FilePath$TunneledInterruptedException
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3088)
        at hudson.remoting.UserRequest.perform(UserRequest.java:212)
        at hudson.remoting.UserRequest.perform(UserRequest.java:54)
        at hudson.remoting.Request$2.run(Request.java:369)
        at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused: java.lang.InterruptedException: java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath.act(FilePath.java:1074)
        at hudson.FilePath.act(FilePath.java:1061)
        at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
        at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
        at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
        at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
        at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
        at hudson.model.Build$BuildExecution.post2(Build.java:186)
        at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
        at hudson.model.Run.execute(Run.java:1835)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
No artifacts found that match the file pattern "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]
Reply | Threaded
Open this post in threaded view
|

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 463 - Still Unstable

Apache Jenkins Server-2
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/463/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestConfigSetsAPI.testUserAndTestDefaultConfigsetsAreSame

Error Message:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/configsets/_default/conf/managed-schema contents doesn't match expected (/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/server/solr/configsets/_default/conf/managed-schema) expected:<...         <tokenizer [name="whitespace"/>       </analyzer>     </fieldType>      <!-- A general text field that has reasonable, generic          cross-language defaults: it tokenizes with StandardTokenizer,         removes stop words from case-insensitive "stopwords.txt"         (empty by default), and down cases.  At query time only, it         also applies synonyms.    -->     <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <!-- in this example, we will only use synonyms at query time         <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="flattenGraph"/>         -->         <filter name="lowercase"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="lowercase"/>       </analyzer>     </fieldType>           <!-- SortableTextField generaly functions exactly like TextField,          except that it supports, and by default uses, docValues for sorting (or faceting)          on the first 1024 characters of the original field values (which is configurable).                    This makes it a bit more useful then TextField in many situations, but the trade-off          is that it takes up more space on disk; which is why it's not used in place of TextField          for every fieldType in this _default schema.    -->     <dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" multiValued="false"/>     <dynamicField name="*_txt_sort" type="text_gen_sort" indexed="true" stored="true"/>     <fieldType name="text_gen_sort" class="solr.SortableTextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="lowercase"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="lowercase"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,          removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and          finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->     <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>     <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer name="standard"/>         <!-- in this example, we will only use synonyms at query time         <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="flattenGraph"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"             />         <filter name="lowercase"/>         <filter name="englishPossessive"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter name="englishMinimalStem"/>        -->         <filter name="porterStem"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter name="lowercase"/>         <filter name="englishPossessive"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter name="englishMinimalStem"/>        -->         <filter name="porterStem"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English, plus          aggressive word-splitting and autophrase features enabled.          This field is just like text_en, except it adds          WordDelimiterGraphFilter to enable splitting and matching of          words on case-change, alpha numeric boundaries, and          non-alphanumeric chars.  This means certain compound word          cases will work, for example query "wi fi" will match          document "WiFi" or "wi-fi".     -->     <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer name="whitespace"/>         <!-- in this example, we will only use synonyms at query time         <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="porterStem"/>         <filter name="flattenGraph" />       </analyzer>       <analyzer type="query">         <tokenizer name="whitespace"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="stop"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="porterStem"/>       </analyzer>     </fieldType>      <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,          but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->     <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer name="whitespace"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="englishMinimalStem"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter name="removeDuplicates"/>         <filter name="flattenGraph" />       </analyzer>       <analyzer type="query">         <tokenizer name="whitespace"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter name="lowercase"/>         <filter name="keywordMarker" protected="protwords.txt"/>         <filter name="englishMinimalStem"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter name="removeDuplicates"/>       </analyzer>     </fieldType>      <!-- Just like text_general except it reverses the characters of         each token, to enable more efficient leading wildcard queries.     -->     <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>     <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer name="standard"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="lowercase"/>         <filter name="reversedWildcard" withOriginal="true"                 maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>       </analyzer>       <analyzer type="query">         <tokenizer name="standard"/>         <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter name="stop" ignoreCase="true" words="stopwords.txt" />         <filter name="lowercase"/>       </analyzer>     </fieldType>      <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>     <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >       <analyzer>         <tokenizer name="standard"/>         <filter name="doubleMetaphone" inject="false"/>       </analyzer>     </fieldType>      <!-- lowercases the entire field value, keeping it as a single token.  -->     <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>     <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="keyword"/>         <filter name="lowercase" />       </analyzer>     </fieldType>      <!--        Example of using PathHierarchyTokenizerFactory at index time, so       queries for paths match documents at that path, or in descendent paths     -->     <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>     <fieldType name="descendent_path" class="solr.TextField">       <analyzer type="index">         <tokenizer name="pathHierarchy" delimiter="/" />       </analyzer>       <analyzer type="query">         <tokenizer name="keyword" />       </analyzer>     </fieldType>      <!--       Example of using PathHierarchyTokenizerFactory at query time, so       queries for paths match documents at that path, or in ancestor paths     -->     <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>     <fieldType name="ancestor_path" class="solr.TextField">       <analyzer type="index">         <tokenizer name="keyword" />       </analyzer>       <analyzer type="query">         <tokenizer name="pathHierarchy" delimiter="/" />       </analyzer>     </fieldType>      <!-- This point type indexes the coordinates as separate fields (subFields)       If subFieldType is defined, it references a type, and a dynamic field       definition is created matching *___<typename>.  Alternately, if        subFieldSuffix is defined, that is used to create the subFields.       Example: if subFieldType="double", then the coordinates would be         indexed in fields myloc_0___double,myloc_1___double.       Example: if subFieldSuffix="_d" then the coordinates would be indexed         in fields myloc_0_d,myloc_1_d       The subFields are an implementation detail of the fieldType, and end       users normally should not need to know about them.      -->     <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>     <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>      <!-- A specialized field for geospatial search filters and distance sorting. -->     <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>      <!-- A geospatial field type that supports multiValued and polygon shapes.       For more information about this and other spatial fields see:       http://lucene.apache.org/solr/guide/spatial-search.html     -->     <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"                geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />      <!-- Payloaded field types -->     <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer name="whitespace"/>         <filter name="delimitedPayload" encoder="float"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer name="whitespace"/>         <filter name="delimitedPayload" encoder="integer"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer name="whitespace"/>         <filter name="delimitedPayload" encoder="identity"/>       </analyzer>     </fieldType>      <!-- some examples for different languages (generally ordered by ISO code) -->      <!-- Arabic -->     <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>     <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- for any non-arabic -->         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />         <!-- normalizes ﻯ to ﻱ, etc -->         <filter name="arabicNormalization"/>         <filter name="arabicStem"/>       </analyzer>     </fieldType>      <!-- Bulgarian -->     <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>     <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" />         <filter name="bulgarianStem"/>       </analyzer>     </fieldType>          <!-- Catalan -->     <dynamicField name="*_txt_ca" type="text_ca"  indexed="true"  stored="true"/>     <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes l', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_ca.txt"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" />         <filter name="snowballPorter" language="Catalan"/>       </analyzer>     </fieldType>          <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->     <dynamicField name="*_txt_cjk" type="text_cjk"  indexed="true"  stored="true"/>     <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="standard"/>         <!-- normalize width before bigram, as e.g. half-width dakuten combine  -->         <filter name="CJKWidth"/>         <!-- for any non-CJK -->         <filter name="lowercase"/>         <filter name="CJKBigram"/>       </analyzer>     </fieldType>      <!-- Czech -->     <dynamicField name="*_txt_cz" type="text_cz"  indexed="true"  stored="true"/>     <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_cz.txt" />         <filter name="czechStem"/>       </analyzer>     </fieldType>          <!-- Danish -->     <dynamicField name="*_txt_da" type="text_da"  indexed="true"  stored="true"/>     <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />         <filter name="snowballPorter" language="Danish"/>       </analyzer>     </fieldType>          <!-- German -->     <dynamicField name="*_txt_de" type="text_de"  indexed="true"  stored="true"/>     <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />         <filter name="germanNormalization"/>         <filter name="germanLightStem"/>         <!-- less aggressive: <filter name="germanMinimalStem"/> -->         <!-- more aggressive: <filter name="snowballPorter" language="German2"/> -->       </analyzer>     </fieldType>          <!-- Greek -->     <dynamicField name="*_txt_el" type="text_el"  indexed="true"  stored="true"/>     <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- greek specific lowercase for sigma -->         <filter name="greekLowercase"/>         <filter name="stop" ignoreCase="false" words="lang/stopwords_el.txt" />         <filter name="greekStem"/>       </analyzer>     </fieldType>          <!-- Spanish -->     <dynamicField name="*_txt_es" type="text_es"  indexed="true"  stored="true"/>     <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />         <filter name="spanishLightStem"/>         <!-- more aggressive: <filter name="snowballPorter" language="Spanish"/> -->       </analyzer>     </fieldType>      <!-- Estonian -->     <dynamicField name="*_txt_et" type="text_et"  indexed="true"  stored="true"/>     <fieldType name="text_et" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_et.txt" />         <filter name="snowballPorter" language="Estonian"/>       </analyzer>     </fieldType>      <!-- Basque -->     <dynamicField name="*_txt_eu" type="text_eu"  indexed="true"  stored="true"/>     <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_eu.txt" />         <filter name="snowballPorter" language="Basque"/>       </analyzer>     </fieldType>          <!-- Persian -->     <dynamicField name="*_txt_fa" type="text_fa"  indexed="true"  stored="true"/>     <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- for ZWNJ -->         <charFilter name="persian"/>         <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="arabicNormalization"/>         <filter name="persianNormalization"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_fa.txt" />       </analyzer>     </fieldType>          <!-- Finnish -->     <dynamicField name="*_txt_fi" type="text_fi"  indexed="true"  stored="true"/>     <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />         <filter name="snowballPorter" language="Finnish"/>         <!-- less aggressive: <filter name="finnishLightStem"/> -->       </analyzer>     </fieldType>          <!-- French -->     <dynamicField name="*_txt_fr" type="text_fr"  indexed="true"  stored="true"/>     <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes l', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_fr.txt"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />         <filter name="frenchLightStem"/>         <!-- less aggressive: <filter name="frenchMinimalStem"/> -->         <!-- more aggressive: <filter name="snowballPorter" language="French"/> -->       </analyzer>     </fieldType>          <!-- Irish -->     <dynamicField name="*_txt_ga" type="text_ga"  indexed="true"  stored="true"/>     <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes d', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_ga.txt"/>         <!-- removes n-, etc. position increments is intentionally false! -->         <filter name="stop" ignoreCase="true" words="lang/hyphenations_ga.txt"/>         <filter name="irishLowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ga.txt"/>         <filter name="snowballPorter" language="Irish"/>       </analyzer>     </fieldType>          <!-- Galician -->     <dynamicField name="*_txt_gl" type="text_gl"  indexed="true"  stored="true"/>     <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_gl.txt" />         <filter name="galicianStem"/>         <!-- less aggressive: <filter name="galicianMinimalStem"/> -->       </analyzer>     </fieldType>          <!-- Hindi -->     <dynamicField name="*_txt_hi" type="text_hi"  indexed="true"  stored="true"/>     <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <!-- normalizes unicode representation -->         <filter name="indicNormalization"/>         <!-- normalizes variation in spelling -->         <filter name="hindiNormalization"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_hi.txt" />         <filter name="hindiStem"/>       </analyzer>     </fieldType>          <!-- Hungarian -->     <dynamicField name="*_txt_hu" type="text_hu"  indexed="true"  stored="true"/>     <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />         <filter name="snowballPorter" language="Hungarian"/>         <!-- less aggressive: <filter name="hungarianLightStem"/> -->       </analyzer>     </fieldType>          <!-- Armenian -->     <dynamicField name="*_txt_hy" type="text_hy"  indexed="true"  stored="true"/>     <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_hy.txt" />         <filter name="snowballPorter" language="Armenian"/>       </analyzer>     </fieldType>          <!-- Indonesian -->     <dynamicField name="*_txt_id" type="text_id"  indexed="true"  stored="true"/>     <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_id.txt" />         <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->         <filter name="indonesianStem" stemDerivational="true"/>       </analyzer>     </fieldType>          <!-- Italian -->   <dynamicField name="*_txt_it" type="text_it"  indexed="true"  stored="true"/>   <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <!-- removes l', etc -->         <filter name="elision" ignoreCase="true" articles="lang/contractions_it.txt"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />         <filter name="italianLightStem"/>         <!-- more aggressive: <filter name="snowballPorter" language="Italian"/> -->       </analyzer>     </fieldType>          <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)           NOTE: If you want to optimize search for precision, use default operator AND in your request          handler config (q.op) Use OR if you would like to optimize for recall (default).     -->     <dynamicField name="*_txt_ja" type="text_ja"  indexed="true"  stored="true"/>     <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">       <analyzer>         <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)             Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic            is used to segment compounds into its parts and the compound itself is kept as synonym.             Valid values for attribute mode are:               normal: regular segmentation               search: segmentation useful for search with synonyms compounds (default)             extended: same as search mode, but unigrams unknown words (experimental)             For some applications it might be good to use search mode for indexing and normal mode for            queries to reduce recall and prevent parts of compounds from being matched and highlighted.            Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.             Kuromoji also has a convenient user dictionary feature that allows overriding the statistical            model with your own entries for segmentation, part-of-speech tags and readings without a need            to specify weights.  Notice that user dictionaries have not been subject to extensive testing.             User dictionary attributes are:                      userDictionary: user dictionary filename              userDictionaryEncoding: user dictionary encoding (default is UTF-8)             See lang/userdict_ja.txt for a sample user dictionary file.             Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.         -->         <tokenizer name="japanese" mode="search"/>         <!--<tokenizer name="japanese" mode="search" userDictionary="lang/userdict_ja.txt"/>-->         <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->         <filter name="japaneseBaseForm"/>         <!-- Removes tokens with certain part-of-speech tags -->         <filter name="japanesePartOfSpeechStop" tags="lang/stoptags_ja.txt" />         <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->         <filter name="cjkWidth"/>         <!-- Removes common tokens typically not useful for search, but have a negative effect on ranking -->         <filter name="stop" ignoreCase="true" words="lang/stopwords_ja.txt" />         <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->         <filter name="japaneseKatakanaStem" minimumLength="4"/>         <!-- Lower-cases romaji characters -->         <filter name="lowercase"/>       </analyzer>     </fieldType>          <!-- Korean morphological analysis -->     <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>     <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)           The Korean (nori) analyzer integrates Lucene nori analysis module into Solr.           It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.            This dictionary was built with MeCab, it defines a format for the features adapted           for the Korean language.                      Nori also has a convenient user dictionary feature that allows overriding the statistical           model with your own entries for segmentation, part-of-speech tags and readings without a need           to specify weights. Notice that user dictionaries have not been subject to extensive testing.            The tokenizer supports multiple schema attributes:             * userDictionary: User dictionary path.             * userDictionaryEncoding: User dictionary encoding.             * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'.             * outputUnknownUnigrams: If true outputs unigrams for unknown words.         -->         <tokenizer name="korean" decompoundMode="discard" outputUnknownUnigrams="false"/>         <!-- Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags',           listing the tags to remove. By default it removes:            E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV           This is basically an equivalent to stemming.         -->         <filter name="koreanPartOfSpeechStop" />         <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->         <filter name="koreanReadingForm" />         <filter name="lowercase" />       </analyzer>     </fieldType>      <!-- Latvian -->     <dynamicField name="*_txt_lv" type="text_lv"  indexed="true"  stored="true"/>     <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_lv.txt" />         <filter name="latvianStem"/>       </analyzer>     </fieldType>          <!-- Dutch -->     <dynamicField name="*_txt_nl" type="text_nl"  indexed="true"  stored="true"/>     <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />         <filter name="stemmerOverride" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>         <filter name="snowballPorter" language="Dutch"/>       </analyzer>     </fieldType>          <!-- Norwegian -->     <dynamicField name="*_txt_no" type="text_no"  indexed="true"  stored="true"/>     <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />         <filter name="snowballPorter" language="Norwegian"/>         <!-- less aggressive: <filter name="norwegianLightStem"/> -->         <!-- singular/plural: <filter name="norwegianMinimalStem"/> -->       </analyzer>     </fieldType>          <!-- Portuguese -->   <dynamicField name="*_txt_pt" type="text_pt"  indexed="true"  stored="true"/>   <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />         <filter name="portugueseLightStem"/>         <!-- less aggressive: <filter name="portugueseMinimalStem"/> -->         <!-- more aggressive: <filter name="snowballPorter" language="Portuguese"/> -->         <!-- most aggressive: <filter name="portugueseStem"/> -->       </analyzer>     </fieldType>          <!-- Romanian -->     <dynamicField name="*_txt_ro" type="text_ro"  indexed="true"  stored="true"/>     <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ro.txt" />         <filter name="snowballPorter" language="Romanian"/>       </analyzer>     </fieldType>          <!-- Russian -->     <dynamicField name="*_txt_ru" type="text_ru"  indexed="true"  stored="true"/>     <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />         <filter name="snowballPorter" language="Russian"/>         <!-- less aggressive: <filter name="russianLightStem"/> -->       </analyzer>     </fieldType>          <!-- Swedish -->     <dynamicField name="*_txt_sv" type="text_sv"  indexed="true"  stored="true"/>     <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />         <filter name="snowballPorter" language="Swedish"/>         <!-- less aggressive: <filter name="swedishLightStem"/> -->       </analyzer>     </fieldType>          <!-- Thai -->     <dynamicField name="*_txt_th" type="text_th"  indexed="true"  stored="true"/>     <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer name="thai"/>         <filter name="lowercase"/>         <filter name="stop" ignoreCase="true" words="lang/stopwords_th.txt" />       </analyzer>     </fieldType>          <!-- Turkish -->     <dynamicField name="*_txt_tr" type="text_tr"  indexed="true"  stored="true"/>     <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer name="standard"/>         <filter name="turkishLowercase"/>         <filter name="stop" ignoreCase="false" words="lang/stopwords_tr.txt" />         <filter name="snowballPorter]" language="Turkish"...> but was:<...         <tokenizer [class="solr.WhitespaceTokenizerFactory"/>       </analyzer>     </fieldType>      <!-- A general text field that has reasonable, generic          cross-language defaults: it tokenizes with StandardTokenizer,         removes stop words from case-insensitive "stopwords.txt"         (empty by default), and down cases.  At query time only, it         also applies synonyms.    -->     <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <!-- in this example, we will only use synonyms at query time         <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.FlattenGraphFilterFactory"/>         -->         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>           <!-- SortableTextField generaly functions exactly like TextField,          except that it supports, and by default uses, docValues for sorting (or faceting)          on the first 1024 characters of the original field values (which is configurable).                    This makes it a bit more useful then TextField in many situations, but the trade-off          is that it takes up more space on disk; which is why it's not used in place of TextField          for every fieldType in this _default schema.    -->     <dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" multiValued="false"/>     <dynamicField name="*_txt_sort" type="text_gen_sort" indexed="true" stored="true"/>     <fieldType name="text_gen_sort" class="solr.SortableTextField" positionIncrementGap="100" multiValued="true">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,          removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and          finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->     <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>     <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- in this example, we will only use synonyms at query time         <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.FlattenGraphFilterFactory"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"             />         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.EnglishPossessiveFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter class="solr.EnglishMinimalStemFilterFactory"/>        -->         <filter class="solr.PorterStemFilterFactory"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.EnglishPossessiveFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:         <filter class="solr.EnglishMinimalStemFilterFactory"/>        -->         <filter class="solr.PorterStemFilterFactory"/>       </analyzer>     </fieldType>      <!-- A text field with defaults appropriate for English, plus          aggressive word-splitting and autophrase features enabled.          This field is just like text_en, except it adds          WordDelimiterGraphFilter to enable splitting and matching of          words on case-change, alpha numeric boundaries, and          non-alphanumeric chars.  This means certain compound word          cases will work, for example query "wi fi" will match          document "WiFi" or "wi-fi".     -->     <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <!-- in this example, we will only use synonyms at query time         <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>         -->         <!-- Case insensitive stop word removal.         -->         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.PorterStemFilterFactory"/>         <filter class="solr.FlattenGraphFilterFactory" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.StopFilterFactory"                 ignoreCase="true"                 words="lang/stopwords_en.txt"         />         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.PorterStemFilterFactory"/>       </analyzer>     </fieldType>      <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,          but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->     <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>     <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">       <analyzer type="index">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.EnglishMinimalStemFilterFactory"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>         <filter class="solr.FlattenGraphFilterFactory" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>         <filter class="solr.WordDelimiterGraphFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>         <filter class="solr.EnglishMinimalStemFilterFactory"/>         <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes              possible with WordDelimiterGraphFilter in conjuncton with stemming. -->         <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>       </analyzer>     </fieldType>      <!-- Just like text_general except it reverses the characters of         each token, to enable more efficient leading wildcard queries.     -->     <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>     <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">       <analyzer type="index">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"                 maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>       </analyzer>       <analyzer type="query">         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>      <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>     <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >       <analyzer>         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>       </analyzer>     </fieldType>      <!-- lowercases the entire field value, keeping it as a single token.  -->     <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>     <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.KeywordTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory" />       </analyzer>     </fieldType>      <!--        Example of using PathHierarchyTokenizerFactory at index time, so       queries for paths match documents at that path, or in descendent paths     -->     <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>     <fieldType name="descendent_path" class="solr.TextField">       <analyzer type="index">         <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.KeywordTokenizerFactory" />       </analyzer>     </fieldType>      <!--       Example of using PathHierarchyTokenizerFactory at query time, so       queries for paths match documents at that path, or in ancestor paths     -->     <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>     <fieldType name="ancestor_path" class="solr.TextField">       <analyzer type="index">         <tokenizer class="solr.KeywordTokenizerFactory" />       </analyzer>       <analyzer type="query">         <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />       </analyzer>     </fieldType>      <!-- This point type indexes the coordinates as separate fields (subFields)       If subFieldType is defined, it references a type, and a dynamic field       definition is created matching *___<typename>.  Alternately, if        subFieldSuffix is defined, that is used to create the subFields.       Example: if subFieldType="double", then the coordinates would be         indexed in fields myloc_0___double,myloc_1___double.       Example: if subFieldSuffix="_d" then the coordinates would be indexed         in fields myloc_0_d,myloc_1_d       The subFields are an implementation detail of the fieldType, and end       users normally should not need to know about them.      -->     <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>     <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>      <!-- A specialized field for geospatial search filters and distance sorting. -->     <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>      <!-- A geospatial field type that supports multiValued and polygon shapes.       For more information about this and other spatial fields see:       http://lucene.apache.org/solr/guide/spatial-search.html     -->     <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"                geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />      <!-- Payloaded field types -->     <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer"/>       </analyzer>     </fieldType>     <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">       <analyzer>         <tokenizer class="solr.WhitespaceTokenizerFactory"/>         <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="identity"/>       </analyzer>     </fieldType>      <!-- some examples for different languages (generally ordered by ISO code) -->      <!-- Arabic -->     <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>     <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- for any non-arabic -->         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ar.txt" />         <!-- normalizes ﻯ to ﻱ, etc -->         <filter class="solr.ArabicNormalizationFilterFactory"/>         <filter class="solr.ArabicStemFilterFactory"/>       </analyzer>     </fieldType>      <!-- Bulgarian -->     <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>     <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>          <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_bg.txt" />          <filter class="solr.BulgarianStemFilterFactory"/>              </analyzer>     </fieldType>          <!-- Catalan -->     <dynamicField name="*_txt_ca" type="text_ca"  indexed="true"  stored="true"/>     <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes l', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ca.txt"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ca.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Catalan"/>              </analyzer>     </fieldType>          <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->     <dynamicField name="*_txt_cjk" type="text_cjk"  indexed="true"  stored="true"/>     <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- normalize width before bigram, as e.g. half-width dakuten combine  -->         <filter class="solr.CJKWidthFilterFactory"/>         <!-- for any non-CJK -->         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.CJKBigramFilterFactory"/>       </analyzer>     </fieldType>      <!-- Czech -->     <dynamicField name="*_txt_cz" type="text_cz"  indexed="true"  stored="true"/>     <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_cz.txt" />         <filter class="solr.CzechStemFilterFactory"/>              </analyzer>     </fieldType>          <!-- Danish -->     <dynamicField name="*_txt_da" type="text_da"  indexed="true"  stored="true"/>     <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Danish"/>              </analyzer>     </fieldType>          <!-- German -->     <dynamicField name="*_txt_de" type="text_de"  indexed="true"  stored="true"/>     <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />         <filter class="solr.GermanNormalizationFilterFactory"/>         <filter class="solr.GermanLightStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.GermanMinimalStemFilterFactory"/> -->         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="German2"/> -->       </analyzer>     </fieldType>          <!-- Greek -->     <dynamicField name="*_txt_el" type="text_el"  indexed="true"  stored="true"/>     <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- greek specific lowercase for sigma -->         <filter class="solr.GreekLowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_el.txt" />         <filter class="solr.GreekStemFilterFactory"/>       </analyzer>     </fieldType>          <!-- Spanish -->     <dynamicField name="*_txt_es" type="text_es"  indexed="true"  stored="true"/>     <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />         <filter class="solr.SpanishLightStemFilterFactory"/>         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Spanish"/> -->       </analyzer>     </fieldType>      <!-- Estonian -->     <dynamicField name="*_txt_et" type="text_et"  indexed="true"  stored="true"/>     <fieldType name="text_et" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_et.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Estonian"/>       </analyzer>     </fieldType>      <!-- Basque -->     <dynamicField name="*_txt_eu" type="text_eu"  indexed="true"  stored="true"/>     <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_eu.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Basque"/>       </analyzer>     </fieldType>          <!-- Persian -->     <dynamicField name="*_txt_fa" type="text_fa"  indexed="true"  stored="true"/>     <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- for ZWNJ -->         <charFilter class="solr.PersianCharFilterFactory"/>         <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.ArabicNormalizationFilterFactory"/>         <filter class="solr.PersianNormalizationFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fa.txt" />       </analyzer>     </fieldType>          <!-- Finnish -->     <dynamicField name="*_txt_fi" type="text_fi"  indexed="true"  stored="true"/>     <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Finnish"/>         <!-- less aggressive: <filter class="solr.FinnishLightStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- French -->     <dynamicField name="*_txt_fr" type="text_fr"  indexed="true"  stored="true"/>     <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes l', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />         <filter class="solr.FrenchLightStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.FrenchMinimalStemFilterFactory"/> -->         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="French"/> -->       </analyzer>     </fieldType>          <!-- Irish -->     <dynamicField name="*_txt_ga" type="text_ga"  indexed="true"  stored="true"/>     <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes d', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ga.txt"/>         <!-- removes n-, etc. position increments is intentionally false! -->         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/hyphenations_ga.txt"/>         <filter class="solr.IrishLowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ga.txt"/>         <filter class="solr.SnowballPorterFilterFactory" language="Irish"/>       </analyzer>     </fieldType>          <!-- Galician -->     <dynamicField name="*_txt_gl" type="text_gl"  indexed="true"  stored="true"/>     <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_gl.txt" />         <filter class="solr.GalicianStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.GalicianMinimalStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Hindi -->     <dynamicField name="*_txt_hi" type="text_hi"  indexed="true"  stored="true"/>     <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <!-- normalizes unicode representation -->         <filter class="solr.IndicNormalizationFilterFactory"/>         <!-- normalizes variation in spelling -->         <filter class="solr.HindiNormalizationFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt" />         <filter class="solr.HindiStemFilterFactory"/>       </analyzer>     </fieldType>          <!-- Hungarian -->     <dynamicField name="*_txt_hu" type="text_hu"  indexed="true"  stored="true"/>     <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/>         <!-- less aggressive: <filter class="solr.HungarianLightStemFilterFactory"/> -->          </analyzer>     </fieldType>          <!-- Armenian -->     <dynamicField name="*_txt_hy" type="text_hy"  indexed="true"  stored="true"/>     <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Armenian"/>       </analyzer>     </fieldType>          <!-- Indonesian -->     <dynamicField name="*_txt_id" type="text_id"  indexed="true"  stored="true"/>     <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_id.txt" />         <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->         <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true"/>       </analyzer>     </fieldType>          <!-- Italian -->   <dynamicField name="*_txt_it" type="text_it"  indexed="true"  stored="true"/>   <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <!-- removes l', etc -->         <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_it.txt"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />         <filter class="solr.ItalianLightStemFilterFactory"/>         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Italian"/> -->       </analyzer>     </fieldType>          <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)           NOTE: If you want to optimize search for precision, use default operator AND in your request          handler config (q.op) Use OR if you would like to optimize for recall (default).     -->     <dynamicField name="*_txt_ja" type="text_ja"  indexed="true"  stored="true"/>     <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">       <analyzer>         <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)             Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic            is used to segment compounds into its parts and the compound itself is kept as synonym.             Valid values for attribute mode are:               normal: regular segmentation               search: segmentation useful for search with synonyms compounds (default)             extended: same as search mode, but unigrams unknown words (experimental)             For some applications it might be good to use search mode for indexing and normal mode for            queries to reduce recall and prevent parts of compounds from being matched and highlighted.            Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.             Kuromoji also has a convenient user dictionary feature that allows overriding the statistical            model with your own entries for segmentation, part-of-speech tags and readings without a need            to specify weights.  Notice that user dictionaries have not been subject to extensive testing.             User dictionary attributes are:                      userDictionary: user dictionary filename              userDictionaryEncoding: user dictionary encoding (default is UTF-8)             See lang/userdict_ja.txt for a sample user dictionary file.             Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.         -->         <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>         <!--<tokenizer class="solr.JapaneseTokenizerFactory" mode="search" userDictionary="lang/userdict_ja.txt"/>-->         <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->         <filter class="solr.JapaneseBaseFormFilterFactory"/>         <!-- Removes tokens with certain part-of-speech tags -->         <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt" />         <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->         <filter class="solr.CJKWidthFilterFactory"/>         <!-- Removes common tokens typically not useful for search, but have a negative effect on ranking -->         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt" />         <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->         <filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/>         <!-- Lower-cases romaji characters -->         <filter class="solr.LowerCaseFilterFactory"/>       </analyzer>     </fieldType>          <!-- Korean morphological analysis -->     <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>     <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <!-- Nori Korean morphological analyzer/tokenizer (KoreanTokenizer)           The Korean (nori) analyzer integrates Lucene nori analysis module into Solr.           It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts.            This dictionary was built with MeCab, it defines a format for the features adapted           for the Korean language.                      Nori also has a convenient user dictionary feature that allows overriding the statistical           model with your own entries for segmentation, part-of-speech tags and readings without a need           to specify weights. Notice that user dictionaries have not been subject to extensive testing.            The tokenizer supports multiple schema attributes:             * userDictionary: User dictionary path.             * userDictionaryEncoding: User dictionary encoding.             * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'.             * outputUnknownUnigrams: If true outputs unigrams for unknown words.         -->         <tokenizer class="solr.KoreanTokenizerFactory" decompoundMode="discard" outputUnknownUnigrams="false"/>         <!-- Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags',           listing the tags to remove. By default it removes:            E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV           This is basically an equivalent to stemming.         -->         <filter class="solr.KoreanPartOfSpeechStopFilterFactory" />         <!-- Replaces term text with the Hangul transcription of Hanja characters, if applicable: -->         <filter class="solr.KoreanReadingFormFilterFactory" />         <filter class="solr.LowerCaseFilterFactory" />       </analyzer>     </fieldType>      <!-- Latvian -->     <dynamicField name="*_txt_lv" type="text_lv"  indexed="true"  stored="true"/>     <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_lv.txt" />         <filter class="solr.LatvianStemFilterFactory"/>       </analyzer>     </fieldType>          <!-- Dutch -->     <dynamicField name="*_txt_nl" type="text_nl"  indexed="true"  stored="true"/>     <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />         <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>         <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>       </analyzer>     </fieldType>          <!-- Norwegian -->     <dynamicField name="*_txt_no" type="text_no"  indexed="true"  stored="true"/>     <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Norwegian"/>         <!-- less aggressive: <filter class="solr.NorwegianLightStemFilterFactory"/> -->         <!-- singular/plural: <filter class="solr.NorwegianMinimalStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Portuguese -->   <dynamicField name="*_txt_pt" type="text_pt"  indexed="true"  stored="true"/>   <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />         <filter class="solr.PortugueseLightStemFilterFactory"/>         <!-- less aggressive: <filter class="solr.PortugueseMinimalStemFilterFactory"/> -->         <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Portuguese"/> -->         <!-- most aggressive: <filter class="solr.PortugueseStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Romanian -->     <dynamicField name="*_txt_ro" type="text_ro"  indexed="true"  stored="true"/>     <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ro.txt" />         <filter class="solr.SnowballPorterFilterFactory" language="Romanian"/>       </analyzer>     </fieldType>          <!-- Russian -->     <dynamicField name="*_txt_ru" type="text_ru"  indexed="true"  stored="true"/>     <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Russian"/>         <!-- less aggressive: <filter class="solr.RussianLightStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Swedish -->     <dynamicField name="*_txt_sv" type="text_sv"  indexed="true"  stored="true"/>     <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />         <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/>         <!-- less aggressive: <filter class="solr.SwedishLightStemFilterFactory"/> -->       </analyzer>     </fieldType>          <!-- Thai -->     <dynamicField name="*_txt_th" type="text_th"  indexed="true"  stored="true"/>     <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">       <analyzer>         <tokenizer class="solr.ThaiTokenizerFactory"/>         <filter class="solr.LowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_th.txt" />       </analyzer>     </fieldType>          <!-- Turkish -->     <dynamicField name="*_txt_tr" type="text_tr"  indexed="true"  stored="true"/>     <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">       <analyzer>          <tokenizer class="solr.StandardTokenizerFactory"/>         <filter class="solr.TurkishLowerCaseFilterFactory"/>         <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_tr.txt" />         <filter class="solr.SnowballPorterFilterFactory]" language="Turkish"...>

Stack Trace:
org.junit.ComparisonFailure: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/configsets/_default/conf/managed-schema contents doesn't match expected (/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/server/solr/configsets/_default/conf/managed-schema) expected:<...
        <tokenizer [name="whitespace"/>
      </analyzer>
    </fieldType>

    <!-- A general text field that has reasonable, generic
         cross-language defaults: it tokenizes with StandardTokenizer,
               removes stop words from case-insensitive "stopwords.txt"
               (empty by default), and down cases.  At query time only, it
               also applies synonyms.
          -->
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <!-- in this example, we will only use synonyms at query time
        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="flattenGraph"/>
        -->
        <filter name="lowercase"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="lowercase"/>
      </analyzer>
    </fieldType>

   
    <!-- SortableTextField generaly functions exactly like TextField,
         except that it supports, and by default uses, docValues for sorting (or faceting)
         on the first 1024 characters of the original field values (which is configurable).
         
         This makes it a bit more useful then TextField in many situations, but the trade-off
         is that it takes up more space on disk; which is why it's not used in place of TextField
         for every fieldType in this _default schema.
          -->
    <dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" multiValued="false"/>
    <dynamicField name="*_txt_sort" type="text_gen_sort" indexed="true" stored="true"/>
    <fieldType name="text_gen_sort" class="solr.SortableTextField" positionIncrementGap="100" multiValued="true">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="lowercase"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="lowercase"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,
         removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
         finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->
    <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>
    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <!-- in this example, we will only use synonyms at query time
        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="flattenGraph"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
            />
        <filter name="lowercase"/>
        <filter name="englishPossessive"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
        <filter name="englishMinimalStem"/>
              -->
        <filter name="porterStem"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
        />
        <filter name="lowercase"/>
        <filter name="englishPossessive"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
        <filter name="englishMinimalStem"/>
              -->
        <filter name="porterStem"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English, plus
         aggressive word-splitting and autophrase features enabled.
         This field is just like text_en, except it adds
         WordDelimiterGraphFilter to enable splitting and matching of
         words on case-change, alpha numeric boundaries, and
         non-alphanumeric chars.  This means certain compound word
         cases will work, for example query "wi fi" will match
         document "WiFi" or "wi-fi".
    -->
    <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>
    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer type="index">
        <tokenizer name="whitespace"/>
        <!-- in this example, we will only use synonyms at query time
        <filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
        />
        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="porterStem"/>
        <filter name="flattenGraph" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="whitespace"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="stop"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
        />
        <filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="porterStem"/>
      </analyzer>
    </fieldType>

    <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
    <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>
    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
      <analyzer type="index">
        <tokenizer name="whitespace"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="englishMinimalStem"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterGraphFilter in conjuncton with stemming. -->
        <filter name="removeDuplicates"/>
        <filter name="flattenGraph" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="whitespace"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
        <filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter name="lowercase"/>
        <filter name="keywordMarker" protected="protwords.txt"/>
        <filter name="englishMinimalStem"/>
        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
             possible with WordDelimiterGraphFilter in conjuncton with stemming. -->
        <filter name="removeDuplicates"/>
      </analyzer>
    </fieldType>

    <!-- Just like text_general except it reverses the characters of
               each token, to enable more efficient leading wildcard queries.
    -->
    <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>
    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer name="standard"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="lowercase"/>
        <filter name="reversedWildcard" withOriginal="true"
                maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer name="standard"/>
        <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter name="stop" ignoreCase="true" words="stopwords.txt" />
        <filter name="lowercase"/>
      </analyzer>
    </fieldType>

    <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>
    <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >
      <analyzer>
        <tokenizer name="standard"/>
        <filter name="doubleMetaphone" inject="false"/>
      </analyzer>
    </fieldType>

    <!-- lowercases the entire field value, keeping it as a single token.  -->
    <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>
    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer name="keyword"/>
        <filter name="lowercase" />
      </analyzer>
    </fieldType>

    <!--
      Example of using PathHierarchyTokenizerFactory at index time, so
      queries for paths match documents at that path, or in descendent paths
    -->
    <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>
    <fieldType name="descendent_path" class="solr.TextField">
      <analyzer type="index">
        <tokenizer name="pathHierarchy" delimiter="/" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="keyword" />
      </analyzer>
    </fieldType>

    <!--
      Example of using PathHierarchyTokenizerFactory at query time, so
      queries for paths match documents at that path, or in ancestor paths
    -->
    <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>
    <fieldType name="ancestor_path" class="solr.TextField">
      <analyzer type="index">
        <tokenizer name="keyword" />
      </analyzer>
      <analyzer type="query">
        <tokenizer name="pathHierarchy" delimiter="/" />
      </analyzer>
    </fieldType>

    <!-- This point type indexes the coordinates as separate fields (subFields)
      If subFieldType is defined, it references a type, and a dynamic field
      definition is created matching *___<typename>.  Alternately, if
      subFieldSuffix is defined, that is used to create the subFields.
      Example: if subFieldType="double", then the coordinates would be
        indexed in fields myloc_0___double,myloc_1___double.
      Example: if subFieldSuffix="_d" then the coordinates would be indexed
        in fields myloc_0_d,myloc_1_d
      The subFields are an implementation detail of the fieldType, and end
      users normally should not need to know about them.
     -->
    <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>
    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>

    <!-- A specialized field for geospatial search filters and distance sorting. -->
    <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>

    <!-- A geospatial field type that supports multiValued and polygon shapes.
      For more information about this and other spatial fields see:
      http://lucene.apache.org/solr/guide/spatial-search.html
    -->
    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
               geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />

    <!-- Payloaded field types -->
    <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">
      <analyzer>
        <tokenizer name="whitespace"/>
        <filter name="delimitedPayload" encoder="float"/>
      </analyzer>
    </fieldType>
    <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">
      <analyzer>
        <tokenizer name="whitespace"/>
        <filter name="delimitedPayload" encoder="integer"/>
      </analyzer>
    </fieldType>
    <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">
      <analyzer>
        <tokenizer name="whitespace"/>
        <filter name="delimitedPayload" encoder="identity"/>
      </analyzer>
    </fieldType>

    <!-- some examples for different languages (generally ordered by ISO code) -->

    <!-- Arabic -->
    <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>
    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer name="standard"/>
        <!-- for any non-arabic -->
        <filter name="lowercase"/>
        <filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />
        <!-- normalizes ﻯ to ﻱ, etc -->
        <filter name="arabicNormalization"/>
        <filter name="arabicStem"/>
      </analyzer>
    </fieldType>

    <!-- Bulgarian -->
    <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>
    <fieldType name="text_bg" class="solr.TextField" positionIncrement

[...truncated too long message...]

ts length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null619305024
     [copy] Copying 249 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null619305024
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null619305024

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: #;working@lucene1-us-west
[ivy:cachepath] confs: [default]
[ivy:cachepath] found org.eclipse.jgit#org.eclipse.jgit;5.3.0.201903130848-r in public
[ivy:cachepath] found com.jcraft#jsch;0.1.54 in public
[ivy:cachepath] found com.jcraft#jzlib;1.1.1 in public
[ivy:cachepath] found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath] found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] found org.bouncycastle#bcpg-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcprov-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcpkix-jdk15on;1.60 in public
[ivy:cachepath] found org.slf4j#slf4j-nop;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 56ms :: artifacts dl 3ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   9   |   0   |   0   |   0   ||   9   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] Checking working copy status...

-jenkins-base:

BUILD SUCCESSFUL
Total time: 106 minutes 55 seconds
Archiving artifacts
java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath$ValidateAntFileMask.hasMatch(FilePath.java:2847)
        at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2726)
        at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2707)
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene
                at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
                at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
                at hudson.remoting.Channel.call(Channel.java:955)
                at hudson.FilePath.act(FilePath.java:1072)
                at hudson.FilePath.act(FilePath.java:1061)
                at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
                at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
                at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
                at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
                at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
                at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
                at hudson.model.Build$BuildExecution.post2(Build.java:186)
                at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
                at hudson.model.Run.execute(Run.java:1835)
                at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
                at hudson.model.ResourceController.execute(ResourceController.java:97)
                at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.FilePath$TunneledInterruptedException
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3088)
        at hudson.remoting.UserRequest.perform(UserRequest.java:212)
        at hudson.remoting.UserRequest.perform(UserRequest.java:54)
        at hudson.remoting.Request$2.run(Request.java:369)
        at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:744)
Caused: java.lang.InterruptedException: java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath.act(FilePath.java:1074)
        at hudson.FilePath.act(FilePath.java:1061)
        at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
        at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
        at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
        at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
        at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
        at hudson.model.Build$BuildExecution.post2(Build.java:186)
        at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
        at hudson.model.Run.execute(Run.java:1835)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
No artifacts found that match the file pattern "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]