Solr 7.2 cannot see all running nodes

Solr 7.2 cannot see all running nodes

Abhi Basu
What am I missing? I followed the instructions at
http://blog.thedigitalgroup.com/susheelk/2015/08/03/solrcloud-2-nodes-solr-1-node-zk-setup/#comment-4321
on 4 nodes. The only difference is that I have 3 external ZooKeeper
servers. This is how I am starting each Solr node:

./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p
8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g

They all run without any errors, but when I try to create a collection
with 2 shards and 2 replicas (2S/2R), I get an error saying only one node
is running.

./server/scripts/cloud-scripts/zkcli.sh -zkhost
zk0-esohad,zk1-esohad,zk3-esohad:2181 -cmd upconfig -confname
ems-collection -confdir
/usr/local/bin/solr-7.2.1/server/solr/configsets/ems-collection-72_configs/conf


"Operation create caused
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
and the number of nodes currently live or live and part of your
createNodeSet is 1. This allows a maximum of 1 to be created. Value of
numShards is 2, value of nrtReplicas is 2, value of tlogReplicas is 0 and
value of pullReplicas is 0. This requires 4 shards to be created (higher
than the allowed number)",
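For reference, here is how the numbers in that error multiply out, a quick sketch of Solr's capacity check with the values taken from the message:

```shell
live_nodes=1            # what Solr currently sees (should be 4)
max_shards_per_node=1   # the default for collection creation
num_shards=2
nrt_replicas=2

needed=$((num_shards * nrt_replicas))           # 4 replica cores to place
capacity=$((live_nodes * max_shards_per_node))  # only 1 slot available
echo "needed=$needed capacity=$capacity"
# With all 4 nodes live, capacity would be 4 * 1 = 4 and the create succeeds.
```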


Any ideas?

Thanks,

Abhi

--
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

GaneshSe
Maybe you can check in the Admin UI --> Cloud --> Tree --> /live_nodes to
see the list of live nodes before creating the collection. If the count is
less than what you expected, check the ZooKeeper logs, or verify
connectivity between the Solr nodes and ZooKeeper.


Re: Solr 7.2 cannot see all running nodes

Abhi Basu
Yes, the admin UI is only showing one live node.

Checking zk logs.

Thanks,

Abhi




--
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Shawn Heisey-2
On 3/29/2018 8:25 AM, Abhi Basu wrote:
> "Operation create caused
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
> and the number of nodes currently live or live and part of your

I'm betting that all your nodes are registering themselves with the same
name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an
address on the loopback interface.

Usually this problem (on an OS other than Windows, at least) is caused
by an incorrect /etc/hosts file that maps your hostname to a loopback
address instead of a real address.
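As an illustration (the hostnames and addresses below are made up), the broken pattern versus the fix in /etc/hosts looks like:

```
# Broken: the node's hostname resolves to a loopback address,
# so Solr registers itself in ZooKeeper as a loopback node.
127.0.0.1   localhost
127.0.1.1   solr-node1

# Fixed: map the hostname to the machine's real address.
127.0.0.1   localhost
10.0.0.11   solr-node1
```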

You can override the value that SolrCloud uses to register itself into
zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh,
I think this is the SOLR_HOST variable, which gets translated into
-Dhost=XXX on the java commandline.  It can also be configured in solr.xml.
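A minimal sketch of that override in solr.in.sh (the hostname here is an assumed example; use each node's real name):

```shell
# In solr.in.sh on each node. Solr turns this into -Dhost=... on the
# Java command line, and that name is what gets registered in ZooKeeper.
SOLR_HOST="solr-node1.example.com"
```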

Thanks,
Shawn


Re: Solr 7.2 cannot see all running nodes

Abhi Basu
So, in the solr.xml on each node should I set the host to the actual host
name?

<solr>

  <solrcloud>

    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>

    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>

    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
    <str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
    <str name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}</str>

  </solrcloud>

</solr>
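For illustration only (the hostname is an assumption): you can either hard-code the real host name per node, or keep the `${host:}` property and supply the value at startup:

```xml
<!-- Option A: hard-code the real host name on each node -->
<str name="host">solr-node1.example.com</str>

<!-- Option B: keep the property and pass -Dhost=solr-node1.example.com
     (or set SOLR_HOST in solr.in.sh) when starting the node -->
<str name="host">${host:}</str>
```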




--
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Walter Underwood
I had that problem. Very annoying, and Solr should probably require a special flag before it will use localhost.

We need to start Solr like this:

./solr start -c -h `hostname`

If anybody ever forgets, we get a 127.0.0.1 node that shows as down in cluster status. No idea how to get rid of that.

wunder
Walter Underwood
[hidden email]
http://observer.wunderwood.org/  (my blog)



Re: Solr 7.2 cannot see all running nodes

WebsterHomer
This ZooKeeper connect string doesn't look right.
>
> ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p
> 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g


Shouldn't the zookeeper ensemble be specified as:
  zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181

You should put the ZooKeeper port on each host in the comma-separated
list. I don't know if this is your problem, but I think your Solr nodes
will only be connecting to one ZooKeeper server.
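A quick sketch of building the corrected connect string (the hostnames are the ones from this thread):

```shell
# Put the port on every ZooKeeper host, not just the last one.
hosts="zk0-esohad zk1-esohad zk3-esohad"
zk=""
for h in $hosts; do
  zk="${zk:+$zk,}$h:2181"
done
echo "$zk"   # zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181
# Then start each node with, e.g.:
#   ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ \
#     -p 8983 -z "$zk" -m 8g
```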


Re: Solr 7.2 cannot see all running nodes

Abhi Basu
Ok, will give it a try along with the host name.





--
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Abhi Basu
Also, another question: where the instructions say to copy zoo.cfg from
the /solr72/server/solr folder to /solr72/server/solr/node1/solr, should I
actually be grabbing the zoo.cfg from one of my external ZK nodes?

Thanks,

Abhi




--
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Abhi Basu
Just an update: adding host names to solr.xml and using "-z
zk1:2181,zk2:2181,zk3:2181" worked. I can now see 4 live nodes and was
able to create the collection with 2S/2R.

Thanks for your help, greatly appreciate it.

Regards,

Abhi




--
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Shawn Heisey-2
On 3/29/2018 12:45 PM, Abhi Basu wrote:
> Also, another question, where it says to copy the zoo.cfg from
> /solr72/server/solr folder to /solr72/server/solr/node1/solr, should I
> actually be grabbing the zoo.cfg from one of my external zk nodes?

If you're using ZooKeeper processes that are separate from Solr, then the
zoo.cfg in the Solr directory is unimportant.

Doing anything related to zoo.cfg in a Solr directory would imply that
you are running Solr with the embedded ZK, which is not recommended in
most cases. The primary issue with the embedded ZK is that when you
stop Solr, you're also stopping ZK.

Thanks,
Shawn