[jira] Created: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org
extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources
----------------------------------------------------------------------------------------------------------------------

                 Key: SOLR-1951
                 URL: https://issues.apache.org/jira/browse/SOLR-1951
             Project: Solr
          Issue Type: Bug
          Components: update
    Affects Versions: 1.4.1, 1.5
         Environment: sun java
solr 1.5 build based on trunk
debian linux "lenny"
            Reporter: Karl Wright


When multiple threads pound on extractingUpdateRequestHandler using multipart form posting over an extended period of time, I'm seeing a huge number of sockets piling up in the following state:

tcp6       0      0 127.0.0.1:8983          127.0.0.1:44058         TIME_WAIT

Even though the client can only have 10 sockets open at a time, huge numbers of sockets accumulate in this state:

root@duck6:~# netstat -an | fgrep :8983 | wc
  28223  169338 2257840
root@duck6:~#

The sheer number of sockets lying around seems to eventually cause commons-fileupload to fail (silently - another bug) when creating a temporary file to hold the content data. This causes Solr to erroneously return a 400 code with "missing_content_data" or some such to the indexing client.


[jira] Updated: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karl Wright updated SOLR-1951:
------------------------------

    Attachment: solr-1951.zip

This is the test code I'm using.
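
(For readers without the attachment: the sketch below is hypothetical, not the attached test code, but it shows the general shape of the load described here - several threads, each posting a multipart document on a brand-new connection and then closing it. Host, port, URL path and payload are purely illustrative.)

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical load generator: every document is posted on its own socket,
// which is opened and closed per request. Each completed request leaves a
// connection sitting in TIME_WAIT for a couple of minutes, so under sustained
// load thousands of them pile up even though only a few are ever open at once.
public class OneShotMultipartPoster {
    static final String HOST = "localhost";  // assumed Solr host
    static final int PORT = 8983;            // assumed Solr port

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {        // ten posting threads, as in the report
            new Thread(OneShotMultipartPoster::postForever).start();
        }
    }

    static void postForever() {
        try {
            while (true) {
                postOneDocument();            // new connection for every document
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    static void postOneDocument() throws Exception {
        String boundary = "----sketchboundary";
        String body =
            "--" + boundary + "\r\n" +
            "Content-Disposition: form-data; name=\"myfile\"; filename=\"doc.txt\"\r\n" +
            "Content-Type: text/plain\r\n" +
            "\r\n" +
            "some document text\r\n" +
            "--" + boundary + "--\r\n";
        byte[] bodyBytes = body.getBytes(StandardCharsets.US_ASCII);

        try (Socket socket = new Socket(HOST, PORT)) {
            OutputStream out = socket.getOutputStream();
            String head =
                "POST /solr/update/extract?literal.id=doc1 HTTP/1.0\r\n" +
                "Host: " + HOST + ":" + PORT + "\r\n" +
                "Content-Type: multipart/form-data; boundary=" + boundary + "\r\n" +
                "Content-Length: " + bodyBytes.length + "\r\n" +
                "\r\n";
            out.write(head.getBytes(StandardCharsets.US_ASCII));
            out.write(bodyBytes);
            out.flush();

            // Drain the response; with plain HTTP/1.0 the connection is then torn
            // down, and whichever side closes first is left holding a TIME_WAIT socket.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* discard */ }
        }
    }
}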

[jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878602#action_12878602 ]

Karl Wright commented on SOLR-1951:
-----------------------------------

A site I found talks about this problem and potential solutions:

>>>>>>
First of all, are the TIME_WAITs client-side or server-side? If server-side, then you
need to redesign your protocol so that your clients initiate the active close of the
connection, whenever possible... (Except for the server occasionally booting
idle/hostile clients, etc...) Generally, a server will be handling clients from many
different machines, so it's far better to spread out the TIME_WAIT load among the
many clients, than it is to make the server bear the full load of them all...

If they're client side, it sounds like you just have a single client, then? And, it's making
a whole bunch of repeated one-shot connections to the server(s)? If so, then you
need to redesign your protocol to add a persistent mode of some kind, so your client
can just reuse a single connection to the server for handling multiple requests, without
needing to open a whole new connection for each one... You'll find your performance
will improve greatly as well, since the set-up/tear-down overhead for TCP is now
adding up to a great deal of your processing, in your current scheme...

However, if you persist in truly wanting to get around TIME_WAIT (and, I think it's a
horribly BAD idea to try to do so, and don't recommend ever doing it), then what you
want is to set "l_linger" to 0... That will force a RST of the TCP connection, thereby
bypassing the normal shutdown procedure, and never entering TIME_WAIT... But,
honestly, DON'T DO THIS! Even if you THINK you know WTF you're doing! It's
just not a good idea, ever... You risk data loss (because your close() of the socket
will now just throw away outstanding data, instead of making sure it's sent), you risk
corruption of future connections (due to reuse of ephemeral ports that would otherwise
be held in TIME_WAIT, if a wandering dup packet happens to show up, or something),
and you break a fundamental feature of TCP that's put there for a very good reason...
All to work around a poorly designed app-level protocol... But, anyway, with that
said, here's the FAQ page on SO_LINGER...
<<<<<<

So, if this can be taken at face value, it argues that the massive number of TIME_WAITs is the result of every document post opening and closing its own socket connection to the server, and that the best solution is to keep the socket connection alive across multiple requests. It's not yet clear whether that's achievable with HTTP under Jetty, but a little research should help.

If that doesn't work out, the SO_LINGER = 0 may well do the trick, but I think that might require a change to jetty.
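
(For reference, this is roughly what the SO_LINGER = 0 trick from the excerpt looks like on a client-side java.net.Socket. It's only a sketch with an illustrative host/port, the excerpt's warning stands, and since the TIME_WAITs here are on the server side of the connection, doing this in the client wouldn't help anyway without a corresponding change in Jetty.)

import java.net.Socket;

// Hypothetical sketch of the SO_LINGER = 0 workaround quoted above.
// setSoLinger(true, 0) makes close() abort the connection with an RST
// instead of the normal FIN handshake, so the socket never enters
// TIME_WAIT, at the price of possibly discarding any unsent data.
public class LingerZeroSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 8983)) {  // illustrative host/port
            socket.setSoLinger(true, 0);  // linger on, timeout 0 => RST on close
            // ... write the request and read the full response here ...
        }  // close() now resets the connection instead of lingering in TIME_WAIT
    }
}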


[jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878613#action_12878613 ]

Karl Wright commented on SOLR-1951:
-----------------------------------

So, the proper solution appears to be to use HTTP keep-alive. Jetty apparently supports the HTTP 1.0 convention for this, which means that you can get Jetty to not close the socket if you simply include the header:

Connection: Keep-Alive

... in each request, and never close the socket but instead reuse it for request after request.
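
For illustration, the request head would look something like this (the path, parameters and length are made up; the significant parts are the HTTP/1.0 request line and the Connection header):

POST /solr/update/extract?literal.id=doc1 HTTP/1.0
Host: localhost:8983
Connection: Keep-Alive
Content-Type: multipart/form-data; boundary=----sketchboundary
Content-Length: 12345

[multipart body follows]

If the server honors it, the response carries Connection: Keep-Alive as well, and the same socket can carry the next POST once the previous response body has been read in full.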

But, unfortunately, keep-alive doesn't seem to work with jetty/Solr.  Only the first request posted (per connection) seems to be recognized.  Subsequent requests are silently eaten.  Either that, or I'm doing something fundamentally wrong.



[jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878625#action_12878625 ]

Karl Wright commented on SOLR-1951:
-----------------------------------

When I turn on http 1.1 with Jetty, I get the following in the response header:

Response header: Connection: close

... which indicates that jetty wants the connection closed.  So, unless there's a config parameter I've missed, that's the end of the story: jetty doesn't handle keep-alive, and therefore jetty will *always* have this TIME_WAIT problem.

I guess the next thing to try is Tomcat...


[jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878628#action_12878628 ]

Yonik Seeley commented on SOLR-1951:
------------------------------------

From what I've always seen, both Jetty and Tomcat handle persistent connections just fine. Problems with this are often in the clients. That said, I've done almost zero testing with multi-part upload in the past, so separating the two issues for testing would probably be best.

[jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878641#action_12878641 ]

Karl Wright commented on SOLR-1951:
-----------------------------------

Well, it's hard to get past the consistent "Connection: close" header back from Jetty.  That tells me that it will not allow a connection to survive a request.

I know from previous experience that Tomcat's keep-alive implementation is sound, so that will effectively separate the multipart handling in Jetty from the issue of keep-alive.  Stay tuned.


[jira] Commented: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

    [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878680#action_12878680 ]

Karl Wright commented on SOLR-1951:
-----------------------------------

OK, so the scoop is the following: both Jetty and Tomcat DO work "fine" - for some definition of fine. The problem was that Jetty responds with an HTTP/1.1 chunked response to an HTTP/1.0 keep-alive request, which certainly was unexpected ;-). But coding to allow for that made it possible to run properly with keep-alive. And now the same check gives:

root@duck6:~# netstat -an | fgrep :8983 | wc
     21     126    1680
root@duck6:~#

... which is a much more reasonable number.

So, if this code does not run out of resources mid-run, we may have gotten past the immediate TIME_WAIT problem.  There's still potentially the temp file leak issue, but that will be addressed in a separate ticket, if it turns out to be a problem.
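
(In case anyone else trips over the same thing, here is a minimal, hypothetical sketch - not the actual test code - of draining a "Transfer-Encoding: chunked" response body so that the kept-alive socket is left positioned at the start of the next response. Real client code would also need to handle Content-Length-delimited bodies and a server that answers Connection: close.)

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: drain a "Transfer-Encoding: chunked" response body from a
// kept-alive socket. Each chunk is "<hex size>\r\n<data>\r\n"; a zero-size chunk
// ends the body, optionally followed by trailers and a final blank line.
public class ChunkedBodyReader {

    public static void drainChunkedBody(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        while (true) {
            String sizeLine = readLine(din);
            int semi = sizeLine.indexOf(';');            // ignore chunk extensions
            if (semi >= 0) sizeLine = sizeLine.substring(0, semi);
            int size = Integer.parseInt(sizeLine.trim(), 16);
            if (size == 0) {
                // consume optional trailers up to the terminating blank line
                while (!readLine(din).isEmpty()) { /* skip trailer */ }
                return;                                  // socket is now reusable
            }
            byte[] chunk = new byte[size];
            din.readFully(chunk);                        // chunk data
            readLine(din);                               // CRLF that follows the chunk
        }
    }

    // Read one CRLF-terminated line as text (good enough for chunk headers).
    private static String readLine(DataInputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\r') continue;
            if (b == '\n') break;
            sb.append((char) b);
        }
        return sb.toString();
    }
}

The caller would first parse the status line and headers off the same stream, and only call drainChunkedBody() when the headers say the body is chunked.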




[jira] Closed: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karl Wright closed SOLR-1951.
-----------------------------

    Fix Version/s: 1.4.1
                   1.5
       Resolution: Fixed

The overnight indexing run did not die due to resource starvation, so it appears that using keep-alive solves this problem, and no fix in Solr is required.


[jira] Reopened: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man reopened SOLR-1951:
----------------------------


fixing status so it's clear no changes were made

[jira] Resolved: (SOLR-1951) extractingUpdateHandler doesn't close socket handles promptly, and indexing load tests eventually run out of resources

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/SOLR-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man resolved SOLR-1951.
----------------------------

    Fix Version/s:     (was: 1.5)
                       (was: 1.4.1)
       Resolution: Invalid
