[jira] Created: (NUTCH-753) Prevent new Fetcher to retrieve the robots twice

[jira] Created: (NUTCH-753) Prevent new Fetcher to retrieve the robots twice

Tim Allison (Jira)
Prevent new Fetcher to retrieve the robots twice
------------------------------------------------

                 Key: NUTCH-753
                 URL: https://issues.apache.org/jira/browse/NUTCH-753
             Project: Nutch
          Issue Type: Improvement
          Components: fetcher
    Affects Versions: 1.0.0
            Reporter: Julien Nioche


The new Fetcher, which is now used by default, handles the robots file directly instead of relying on the protocol. The options Protocol.CHECK_BLOCKING and Protocol.CHECK_ROBOTS are set to false to prevent fetching robots.txt twice (once in the Fetcher and once in the protocol), which avoids the call to robots.isAllowed(). In practice, however, the robots file is still fetched, because a later call to robots.getCrawlDelay() is not covered by the if (Protocol.CHECK_ROBOTS) guard.
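
To make the flow concrete, here is a minimal, self-contained sketch of the control flow described above; all names are illustrative stand-ins, not the actual Fetcher or protocol source:

    // Illustrative sketch of the double fetch; names are hypothetical,
    // not the real Nutch classes.
    public class RobotsDoubleFetchSketch {

      static int robotsFetches = 0;

      // Stand-ins for the robots parser: each call re-fetches robots.txt.
      static boolean isAllowed(String url)  { robotsFetches++; return true; }
      static long getCrawlDelay(String url) { robotsFetches++; return 1000L; }

      public static void main(String[] args) {
        boolean checkRobots = false; // what the new Fetcher sets via Protocol.CHECK_ROBOTS

        if (checkRobots) {
          isAllowed("http://example.com/"); // correctly skipped when false
        }
        // The unguarded call below is the bug: robots.txt is fetched
        // even though checkRobots is false.
        long delay = getCrawlDelay("http://example.com/");

        System.out.println("robots.txt fetches: " + robotsFetches); // prints 1, expected 0
      }
    }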


--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

[jira] Updated: (NUTCH-753) Prevent new Fetcher to retrieve the robots twice

Tim Allison (Jira)

     [ https://issues.apache.org/jira/browse/NUTCH-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julien Nioche updated NUTCH-753:
--------------------------------

    Attachment: NUTCH-753.patch

Patch that prevents fetching the robots file twice with the new Fetcher.
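
The patch itself is attached above and is not reproduced here. As a rough sketch only (hypothetical names, not the actual NUTCH-753 diff), the shape of the fix is to move the crawl-delay lookup under the same guard so the rules fetched once are reused, falling back to a configured default delay otherwise (the fallback is an assumption about the intended behaviour, not a claim about the patch):

    // Rough sketch of the shape of the fix; illustrative names only.
    public class GuardedCrawlDelaySketch {

      interface RobotRules {
        boolean isAllowed(String url);
        long getCrawlDelay();
      }

      static RobotRules fetchRobotRules(String url) {
        // Stand-in for the single robots.txt fetch.
        return new RobotRules() {
          public boolean isAllowed(String u) { return true; }
          public long getCrawlDelay()        { return 5000L; }
        };
      }

      /** Returns the delay to use, or -1 if the URL is denied by robots.txt. */
      static long resolveCrawlDelay(boolean checkRobots, String url, long defaultDelay) {
        if (checkRobots) {
          RobotRules rules = fetchRobotRules(url);   // the only robots.txt fetch
          if (!rules.isAllowed(url)) return -1L;
          return rules.getCrawlDelay();              // reuse the rules just fetched
        }
        // checkRobots == false: the new Fetcher owns robots handling,
        // so the protocol must not touch robots.txt at all.
        return defaultDelay;
      }

      public static void main(String[] args) {
        // With the guard in place, no robots.txt fetch happens here.
        System.out.println(resolveCrawlDelay(false, "http://example.com/", 1000L));
      }
    }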

> Prevent new Fetcher to retrieve the robots twice
> ------------------------------------------------
>
>                 Key: NUTCH-753
>                 URL: https://issues.apache.org/jira/browse/NUTCH-753
>             Project: Nutch
>          Issue Type: Improvement
>          Components: fetcher
>    Affects Versions: 1.0.0
>            Reporter: Julien Nioche
>         Attachments: NUTCH-753.patch
>
>
> The new Fetcher, which is now used by default, handles the robots file directly instead of relying on the protocol. The options Protocol.CHECK_BLOCKING and Protocol.CHECK_ROBOTS are set to false to prevent fetching robots.txt twice (once in the Fetcher and once in the protocol), which avoids the call to robots.isAllowed(). In practice, however, the robots file is still fetched, because a later call to robots.getCrawlDelay() is not covered by the if (Protocol.CHECK_ROBOTS) guard.


[jira] Commented: (NUTCH-753) Prevent new Fetcher to retrieve the robots twice

Tim Allison (Jira)

    [ https://issues.apache.org/jira/browse/NUTCH-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12782516#action_12782516 ]

Andrzej Bialecki commented on NUTCH-753:
-----------------------------------------

Fixed in rev. 884203 - thanks!

> Prevent new Fetcher to retrieve the robots twice
> ------------------------------------------------
>
>                 Key: NUTCH-753
>                 URL: https://issues.apache.org/jira/browse/NUTCH-753
>             Project: Nutch
>          Issue Type: Improvement
>          Components: fetcher
>    Affects Versions: 1.0.0
>            Reporter: Julien Nioche
>            Assignee: Andrzej Bialecki
>             Fix For: 1.1
>
>         Attachments: NUTCH-753.patch
>
>
> The new Fetcher, which is now used by default, handles the robots file directly instead of relying on the protocol. The options Protocol.CHECK_BLOCKING and Protocol.CHECK_ROBOTS are set to false to prevent fetching robots.txt twice (once in the Fetcher and once in the protocol), which avoids the call to robots.isAllowed(). In practice, however, the robots file is still fetched, because a later call to robots.getCrawlDelay() is not covered by the if (Protocol.CHECK_ROBOTS) guard.


[jira] Closed: (NUTCH-753) Prevent new Fetcher to retrieve the robots twice

Tim Allison (Jira)

     [ https://issues.apache.org/jira/browse/NUTCH-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrzej Bialecki closed NUTCH-753.
-----------------------------------

       Resolution: Fixed
    Fix Version/s: 1.1
         Assignee: Andrzej Bialecki

> Prevent new Fetcher to retrieve the robots twice
> ------------------------------------------------
>
>                 Key: NUTCH-753
>                 URL: https://issues.apache.org/jira/browse/NUTCH-753
>             Project: Nutch
>          Issue Type: Improvement
>          Components: fetcher
>    Affects Versions: 1.0.0
>            Reporter: Julien Nioche
>            Assignee: Andrzej Bialecki
>             Fix For: 1.1
>
>         Attachments: NUTCH-753.patch
>
>
> The new Fetcher, which is now used by default, handles the robots file directly instead of relying on the protocol. The options Protocol.CHECK_BLOCKING and Protocol.CHECK_ROBOTS are set to false to prevent fetching robots.txt twice (once in the Fetcher and once in the protocol), which avoids the call to robots.isAllowed(). In practice, however, the robots file is still fetched, because a later call to robots.getCrawlDelay() is not covered by the if (Protocol.CHECK_ROBOTS) guard.


[jira] Commented: (NUTCH-753) Prevent new Fetcher to retrieve the robots twice

Tim Allison (Jira)

    [ https://issues.apache.org/jira/browse/NUTCH-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12783235#action_12783235 ]

Hudson commented on NUTCH-753:
------------------------------

Integrated in Nutch-trunk #995 (See [http://hudson.zones.apache.org/hudson/job/Nutch-trunk/995/])
     Prevent new Fetcher from retrieving the robots twice.


> Prevent new Fetcher to retrieve the robots twice
> ------------------------------------------------
>
>                 Key: NUTCH-753
>                 URL: https://issues.apache.org/jira/browse/NUTCH-753
>             Project: Nutch
>          Issue Type: Improvement
>          Components: fetcher
>    Affects Versions: 1.0.0
>            Reporter: Julien Nioche
>            Assignee: Andrzej Bialecki
>             Fix For: 1.1
>
>         Attachments: NUTCH-753.patch
>
>
> The new Fetcher, which is now used by default, handles the robots file directly instead of relying on the protocol. The options Protocol.CHECK_BLOCKING and Protocol.CHECK_ROBOTS are set to false to prevent fetching robots.txt twice (once in the Fetcher and once in the protocol), which avoids the call to robots.isAllowed(). In practice, however, the robots file is still fetched, because a later call to robots.getCrawlDelay() is not covered by the if (Protocol.CHECK_ROBOTS) guard.
