[jira] Created: (NUTCH-481) http.content.limit is broken in the protocol-httpclient plugin


[jira] Created: (NUTCH-481) http.content.limit is broken in the protocol-httpclient plugin

JIRA jira@apache.org
http.content.limit is broken in the protocol-httpclient plugin
--------------------------------------------------------------

                 Key: NUTCH-481
                 URL: https://issues.apache.org/jira/browse/NUTCH-481
             Project: Nutch
          Issue Type: Bug
          Components: fetcher
    Affects Versions: 0.9.0
            Reporter: charlie wanek


When using the protocol-httpclient plugin, the entire content of the requested URL is retrieved, regardless of the http.content.limit configuration setting.  (The issue does not affect the protocol-http plugin.)

For very large documents, this leads the Fetcher to believe that the FetcherThread is hung, and the Fetcher aborts its run, logging a warning about hung threads (Fetcher.java:433).

org.apache.nutch.protocol.httpclient.HttpResponse counts the content length correctly and breaks out of its read loop at the right point.

However, when HttpResponse closes the InputStream it is reading from, that stream (an org.apache.commons.httpclient.AutoCloseInputStream) still drains the remainder of the document from the webserver before releasing the connection.

Though I'm not certain this is the correct solution, a quick test shows that if HttpResponse is changed to abort the GET instead, the InputStream stops reading from the webserver, and the FetcherThread can continue.
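A minimal sketch of the proposed behavior, for illustration only: the class, method, and field names below (ContentLimitSketch, readWithLimit, abortRequest) are hypothetical stand-ins, not Nutch's actual code, and abortRequest() stands in for Commons HttpClient's HttpMethod#abort(), which closes the underlying connection so no further bytes are read from the server. The key change is aborting the request at the limit instead of relying on close().

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ContentLimitSketch {

  static boolean aborted = false;

  // Stand-in for GetMethod.abort(); in real code this would drop the
  // connection so an AutoCloseInputStream cannot drain the remainder.
  static void abortRequest() {
    aborted = true;
  }

  // Read at most contentLimit bytes (a negative limit means unlimited),
  // then abort the request rather than closing the stream normally.
  static byte[] readWithLimit(InputStream in, int contentLimit) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[8192];
    int total = 0;
    int n;
    while ((n = in.read(buffer)) != -1) {
      if (contentLimit >= 0 && total + n > contentLimit) {
        out.write(buffer, 0, contentLimit - total);  // keep only up to the limit
        abortRequest();                              // key change: abort the GET here
        break;
      }
      out.write(buffer, 0, n);
      total += n;
    }
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    // Simulate a 100,000-byte response with a 65,536-byte content limit.
    byte[] content = readWithLimit(new ByteArrayInputStream(new byte[100_000]), 65536);
    System.out.println(content.length + " " + aborted);  // prints "65536 true"
  }
}
```

With this shape, close() on the aborted request has nothing left to drain, so a very large document no longer stalls the FetcherThread past the hung-thread timeout.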


--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (NUTCH-481) http.content.limit is broken in the protocol-httpclient plugin

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/NUTCH-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

charlie wanek updated NUTCH-481:
--------------------------------

    Attachment: abortatcontentlimit.patch

Patch to org/apache/nutch/protocol/httpclient/HttpResponse.java to abort the GET if the content limit is reached.

> http.content.limit is broken in the protocol-httpclient plugin
> --------------------------------------------------------------
>
>                 Key: NUTCH-481
>                 URL: https://issues.apache.org/jira/browse/NUTCH-481
>             Project: Nutch
>          Issue Type: Bug
>          Components: fetcher
>    Affects Versions: 0.9.0
>            Reporter: charlie wanek
>         Attachments: abortatcontentlimit.patch



[jira] Closed: (NUTCH-481) http.content.limit is broken in the protocol-httpclient plugin

JIRA jira@apache.org

     [ https://issues.apache.org/jira/browse/NUTCH-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doğacan Güney closed NUTCH-481.
-------------------------------

       Resolution: Fixed
    Fix Version/s: 1.0.0
         Assignee: Doğacan Güney

Fixed as part of NUTCH-559.

