weird behavior on hadoop

weird behavior on hadoop

Claudio Martella
Hello list,

I have a 3-node cluster and I'm running Nutch 1.2 on it. I also have a
fourth dev machine that launches Hadoop/Nutch jobs on the cluster (its
configuration just specifies the jobtracker and the namenode).
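
For reference, a quick way to check what the launching machine actually
resolves from that config is a snippet along these lines (just a sketch;
the property names are the Hadoop 0.20-era ones that Nutch 1.2 uses):

import org.apache.hadoop.conf.Configuration;

public class PrintClusterConfig {
  public static void main(String[] args) {
    // Picks up hadoop-site.xml / core-site.xml from whatever conf/
    // directory is on the classpath of the machine launching the job.
    Configuration conf = new Configuration();
    System.out.println("fs.default.name    = " + conf.get("fs.default.name"));
    System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
  }
}

On the dev machine both point at the cluster, so the basic client wiring
seems right.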

When I launch the job from the node running the jobtracker, Nutch runs
the crawl successfully.

But when I run the job from the dev machine, the crawl stops at depth 1.
This is weird because it doesn't complain with any exception or error;
it just stops at the second iteration of the generator.

Basically it injects the seed, runs the first cycle of generate,
fetch -noParsing, parse, and updatedb, and then at the second generate it
stops because no new URLs to fetch are found. As a matter of fact, it even
sends the seed's parse to Solr.
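
For context, as far as I understand the crawl command just drives a loop
like the sketch below (paraphrased from memory of Nutch's Crawl class,
with approximated signatures and helper names, so don't read it as the
verbatim 1.2 source). The relevant detail is that when the generator
selects nothing it returns null and the loop breaks silently, which would
explain why I see no error:

// Sketch of the depth loop in org.apache.nutch.crawl.Crawl (paraphrased;
// generator, fetcher, parseSegment and crawlDbTool stand in for the real
// Nutch tool objects). When the Generator finds no URLs due for fetching
// it returns null and the loop simply breaks -- no exception is thrown.
for (int i = 0; i < depth; i++) {
  Path[] segments = generator.generate(crawlDb, segmentsDir, -1, topN,
      System.currentTimeMillis());
  if (segments == null) {
    break; // "no more URLs to fetch" -- the branch my dev-machine runs hit
  }
  fetcher.fetch(segments[0], threads);     // fetch -noParsing in my case
  parseSegment.parse(segments[0]);         // parse
  crawlDbTool.update(crawlDb, segments);   // updatedb
}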

I copied the Nutch directory as-is, script included, from the cluster
node to the dev machine. The only difference is that the user running the
job on the dev machine is different. But the HDFS directory I crawl into
is owned by this user (and indeed there is no permission-denied error).
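
For what it's worth, this is roughly how I'd double-check the ownership
the cluster actually sees (a sketch; the path is a placeholder for my
real crawl directory):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WhoOwnsCrawlDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf); // HDFS, per fs.default.name
    FileStatus st = fs.getFileStatus(new Path("/user/crawler/crawl")); // placeholder
    System.out.println("launching user = " + System.getProperty("user.name"));
    System.out.println("owner = " + st.getOwner()
        + ", group = " + st.getGroup()
        + ", perms = " + st.getPermission());
  }
}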

This is driving me crazy. Any idea where I should look?

--

Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
[hidden email] http://www.tis.bz.it


Re: weird behavior on hadoop

Andrzej Białecki
On 12/14/10 6:50 PM, Claudio Martella wrote:

> [...]

This looks like an environment or property-setting issue... The ultimate
answer to this is the job.xml (available via the jobtracker UI when you
click on the job details), which should contain the right values. Pay
special attention to paths: they should either be relative to the top of
the job jar or point to valid HDFS locations.
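
If it helps, you don't have to compare the two job.xml files by eye; a
small utility along these lines (file names below are placeholders) loads
each one and prints only the properties that differ:

import java.util.Map;
import java.util.TreeSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class DiffJobXml {
  public static void main(String[] args) {
    // Load each job.xml without the local default resources, so we
    // compare exactly what the jobtracker saw for each submission.
    Configuration good = new Configuration(false);
    good.addResource(new Path("job-from-cluster-node.xml")); // placeholder
    Configuration bad = new Configuration(false);
    bad.addResource(new Path("job-from-dev-machine.xml"));   // placeholder

    TreeSet<String> keys = new TreeSet<String>();
    for (Map.Entry<String, String> e : good) keys.add(e.getKey());
    for (Map.Entry<String, String> e : bad) keys.add(e.getKey());

    for (String key : keys) {
      String a = good.get(key);
      String b = bad.get(key);
      if (a == null ? b != null : !a.equals(b)) {
        System.out.println(key + "\n  cluster: " + a + "\n  dev:     " + b);
      }
    }
  }
}

Whatever shows up in that diff is your list of suspects.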


--
Best regards,
Andrzej Bialecki     <><
  ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com


Re: weird behavior on hadoop

Claudio Martella
Hi Andrzej,

Thanks for the answer. I'll compare the two job.xml files and report back.

On 12/14/10 7:05 PM, Andrzej Bialecki wrote:

> [...]


--
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
[hidden email] http://www.tis.bz.it
