I've recently stood up a SolrCloud 7.7.1 cluster on AWS EC2 instances, with a
dedicated ZooKeeper ensemble (3.4.13). The basics appear to be in place: there
is a 'test' collection, I can access the Solr Admin UI through an SSH tunnel,
and I can run the following API calls successfully (from a Solr node in the
cluster).
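(The calls themselves didn't make it into this message. As a purely hypothetical illustration of the kind of query involved — the 'test' collection and node IP are from above, but the handler and parameters are my assumption — using only Python's standard library:)

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Private IP of one Solr node in the cluster (from the post above).
SOLR_NODE = "10.131.200.233"

# A match-all query against the stock /select handler of the 'test'
# collection. These parameters are assumed, not taken from the post.
params = urlencode({"q": "*:*", "rows": 0, "wt": "json"})
url = f"http://{SOLR_NODE}:8983/solr/test/select?{params}"
print(url)

# On a machine that can actually reach the node:
# body = urlopen(url, timeout=5).read()
```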
However, attempting the same API calls from a remote client results in
connection resets. It does not appear to be a firewall issue, since neither
netcat nor SSH has any trouble connecting. The connections are made over
private IP addresses within the network, from the same machine I use to SSH
into the Solr EC2 instances (10.131.200.233 as an example).
nc -vz 10.131.200.233 8983
found 0 associations
found 1 connections:
src 172.16.253.5 port 50830
dst 10.131.200.233 port 8983
rank info not available
TCP aux info available
Connection to 10.131.200.233 port 8983 [tcp/*] succeeded!
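(For what it's worth, a successful netcat handshake doesn't by itself rule out a firewall or middlebox: something in the path can accept the TCP connection and then reset it once data flows, which matches the symptom above. A local sketch of that behavior — toy server on loopback, arbitrary port, nothing here is Solr-specific:)

```python
import socket
import struct
import threading

def reset_server(srv):
    """Accept the TCP handshake, then RST the connection once data
    arrives -- mimicking a middlebox that lets SYNs through but blocks
    the application traffic."""
    conn, _ = srv.accept()
    conn.recv(1)  # wait for the client's request bytes
    # SO_LINGER with a zero timeout makes close() send RST, not FIN.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # any free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=reset_server, args=(srv,), daemon=True).start()

# The handshake succeeds -- this is all that `nc -vz` checks.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET /solr/test/select HTTP/1.0\r\n\r\n")
try:
    cli.recv(4096)
    outcome = "response"
except ConnectionResetError:
    outcome = "connection reset"
print(outcome)
```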
I am unable to access the Solr Admin UI without an SSH tunnel, contrary to
the doc at
Internet searches have resulted in a fair bit of confusion, but it seems
like Solr denies anything that isn't localhost as a security feature. In our
use case we have a number of client applications already deployed on a fleet
of other EC2 instances and would like to give them API search capabilities
against this up-and-coming SolrCloud cluster. I was thinking of simply
putting an AWS ALB/ELB in front of the Solr nodes, but the primary concern is
getting remote queries working in the first place.
Re: Client Connection Resets during Solr API Calls
Resolved this when I noticed that the ELB health checks were able to reach
the Admin UI, which meant Solr itself was accepting remote connections. The
issue was that port 8983 was in fact blocked from the SSH machine, since that
machine connects over a VPN tunnel into AWS. Our deployed applications had no
issue accessing the test collection, as they reside entirely within AWS.