Unable to write response, client closed connection or we are shutting down


Unable to write response, client closed connection or we are shutting down

Nawab Zada Asad Iqbal
Hi,

I am running a query performance test against my Solr 6.6 setup and I
noticed the following exception every now and then. What do I need to do?

Aug 11, 2017 08:40:07 AM INFO  (qtp761960786-250) [   x:filesearch]
o.a.s.s.HttpSolrCall Unable to write response, client closed connection or
we are shutting down
org.eclipse.jetty.io.EofException
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:199)
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:420)
    at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:375)
    at org.eclipse.jetty.io.SelectChannelEndPoint$3.run(SelectChannelEndPoint.java:107)
    at org.eclipse.jetty.io.SelectChannelEndPoint.onSelected(SelectChannelEndPoint.java:193)
    at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.processSelected(ManagedSelector.java:283)
    at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:181)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:51)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:177)



Apart from that, I also noticed that the query response time is longer than
I expected, while memory utilization stays <= 35%. I thought I might have
set maxThreads (Jetty) to a very low number somewhere, but I am falling
back on the default, which is 10000, so that shouldn't be the problem.


Thanks
Nawab

Re: Unable to write response, client closed connection or we are shutting down

Rick Leir-2
Nawab
What test software do you use? What else is happening when the exception occurs?
Cheers -- Rick

On August 12, 2017 1:48:19 PM EDT, Nawab Zada Asad Iqbal <[hidden email]> wrote:

<snip>

--
Sorry for being brief. Alternate email is rickleir at yahoo dot com

Re: Unable to write response, client closed connection or we are shutting down

Nawab Zada Asad Iqbal
Hi Rick

My test software is not very sophisticated: I have picked some queries from
production logs and I am replaying them against this Solr installation. It
is not SolrCloud, but I specify "shards=" in the query to gather results
from all shards.
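A replay harness of this kind can be sketched in a few lines. Everything below (hostnames, core name, query parameters, the `build_query_url` helper) is hypothetical and for illustration only, not Nawab's actual tool:

```python
import urllib.parse
import urllib.request

def build_query_url(base, params, shards=None):
    """Build a Solr /select URL; passing `shards` makes it a distributed query."""
    q = dict(params)
    if shards:
        q["shards"] = ",".join(shards)
    return base + "/select?" + urllib.parse.urlencode(q)

# Hypothetical hosts and core name.
url = build_query_url(
    "http://solr1:8983/solr/filesearch",
    {"q": "title:report", "rows": 10},
    shards=["solr1:8983/solr/filesearch", "solr2:8983/solr/filesearch"],
)

# The client-side timeout matters here: if it is shorter than the slowest
# query, the client hangs up early and Solr logs the EofException above.
# (Left commented out, since it needs a live server.)
# with urllib.request.urlopen(url, timeout=60) as resp:
#     body = resp.read()
```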

I found some values to tweak, e.g.:
    <int name="maxConnectionsPerHost">1500</int>
    <int name="maxConnections">150000</int>

After doing this, the "Unable to write response, client closed connection
or we are shutting down" error is mostly gone.
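For reference, in a non-SolrCloud distributed setup these limits live on the shard handler in solrconfig.xml. A sketch (values are illustrative; socketTimeout and connTimeout are the knobs governing how long the aggregating node waits on each shard):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <shardHandlerFactory class="HttpShardHandlerFactory">
    <int name="socketTimeout">600000</int>  <!-- ms, read timeout per shard request -->
    <int name="connTimeout">60000</int>     <!-- ms, connect timeout -->
    <int name="maxConnectionsPerHost">1500</int>
    <int name="maxConnections">150000</int>
  </shardHandlerFactory>
</requestHandler>
```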

However, query performance is bad. The server is not using all of its
assigned memory, while CPU usage is reaching 80%.
The query response time is 50+ times worse (e.g., 1400 ms vs. 20 ms at the
75th percentile).
What can I do to make use of more memory and hopefully alleviate some of
this bad performance?

My cache settings are identical to the older setup.


Thanks
Nawab





On Mon, Aug 14, 2017 at 9:01 AM, Rick Leir <[hidden email]> wrote:

<snip>

Re: Unable to write response, client closed connection or we are shutting down

Nawab Zada Asad Iqbal
So, I tried a few things, and it seems there are more page faults after
the Solr 6 upgrade. Even when there is no update or query activity (except
the periodic commit), the page faults are a little higher than they used
to be.


Any suggestions in this area?

Thanks
Nawab

On Tue, Aug 15, 2017 at 4:09 PM, Nawab Zada Asad Iqbal <[hidden email]>
wrote:

<snip>

Re: Unable to write response, client closed connection or we are shutting down

Shawn Heisey-2
In reply to this post by Nawab Zada Asad Iqbal
On 8/12/2017 11:48 AM, Nawab Zada Asad Iqbal wrote:
> I am executing a query performance test against my solr 6.6 setup and I
> noticed following exception every now and then. What do I need to do?
>
> Aug 11, 2017 08:40:07 AM INFO  (qtp761960786-250) [   x:filesearch]
> o.a.s.s.HttpSolrCall Unable to write response, client closed connection or
> we are shutting down
> org.eclipse.jetty.io.EofException

<snip>

> Caused by: java.io.IOException: Broken pipe

<snip>

> Apart from that, I also noticed that the query response time is longer than
> I expected, while the memory utilization stays <= 35%. I thought that
> somewhere I have set maxThreads (Jetty) to a very low number, however I am
> falling back on default which is 10000 (so that shouldn't be a problem).

The EofException and "broken pipe" messages are typical when the client
closes the TCP connection before Solr finishes processing the request
and sends a response.  When Solr finally finishes working and has a
response, the web container where Solr is running tries to send the
response back, but finds that the connection is gone, and logs the kind
of exception you are seeing.

Very likely what has happened is that the program sending the queries
has a very low socket timeout (or total request timeout) configured on
the http connection, and that the requests are taking longer than that
timeout to execute, so the query software closes the connection.
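The mechanics are easy to reproduce in isolation, no Solr required: writing to a socket whose peer has already closed raises the same "Broken pipe" condition that Jetty wraps in EofException. A minimal sketch:

```python
import socket

# A connected socket pair stands in for the Solr/client connection.
server_side, client_side = socket.socketpair()

client_side.close()  # the client gives up before the response is written

err = None
try:
    # Keep writing until the kernel notices the peer is gone.
    for _ in range(100):
        server_side.sendall(b"x" * 65536)
except OSError as exc:  # typically BrokenPipeError (EPIPE)
    err = exc
finally:
    server_side.close()

print("got:", type(err).__name__)
```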

Later in the thread you mentioned maxConnections.  Some software might
decide to kill existing connections when that limit is exceeded, so more
connections can be opened.  That's something you'd need to discuss with
whoever wrote the software.

Also later in the thread you mentioned "page faults" ... without a lot
of specific detail, we're not going to have any idea what you mean by
that.  I can tell you that if you're looking at operating system memory
counters, page faults are a completely normal part of OS operation.  By
itself, that number won't mean anything.
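If the Solr 4 vs. Solr 6 comparison is worth pursuing, the useful step is to measure the same counter on both hosts rather than eyeballing. A sketch using only the standard library (Unix only; the counters are cumulative per process):

```python
import resource

# ru_minflt: "minor" faults satisfied without disk I/O (normal and cheap).
# ru_majflt: "major" faults that had to read a page from disk; these are
# the expensive ones worth comparing between the two setups.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", usage.ru_minflt)
print("major faults:", usage.ru_majflt)
```

For the Solr JVM itself the same counters can be read externally, e.g. `ps -o min_flt,maj_flt -p <pid>` on Linux with procps.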

Long query times can be caused by many things.  One of the most common
is not having enough memory left over for the operating system to
effectively cache your index ... but this is not the only thing that can
cause problems.

Thanks,
Shawn


Re: Unable to write response, client closed connection or we are shutting down

Nawab Zada Asad Iqbal
Hi Shawn,

Double thanks for answering my whole thread.

Regarding the page faults: they seem to be a concern because this setup is
identical for both Solr 4 and Solr 6, although I cannot find a good way to
debug them yet.

I found some strange behavior today: my primary Solr node (which handles
queries with the 'shards' parameter) is asking for a very large number of
'rows' from the shard nodes. (I sent this in a different email so that I
don't jumble different questions together in the same thread.)


Thanks
Nawab


On Thu, Aug 17, 2017 at 11:32 AM, Shawn Heisey <[hidden email]> wrote:

<snip>