Hadoop replication warning

Hadoop replication warning

HUYLEBROECK Jeremy RD-ILAB-SSF-2

On my dev machine, I have dfs.replication set to 1 in
conf/hadoop-site.xml.
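
For reference, the override looks roughly like this in conf/hadoop-site.xml
(a minimal sketch in the standard Hadoop property format; the rest of the
file is omitted):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>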

But I get these warning messages in hadoop.log:

Fs.FSNamesystem - Replication requested of 2 is larger than cluster
size(1). Using cluster size.
Zero targets found, forbidden1.size=1 forbidden2.size()=0

I don't know where the 2 is coming from, since hadoop-default.xml sets it to
3 and the Hadoop code also defaults to 3.
Also, hadoop-site.xml is being read properly (the fs configuration is used
correctly, for instance).

Can I completely ignore these warnings, or do they mean that there is
something bad going on?

I use hadoop-0.4 with nutch-0.8.


Re: Hadoop replication warning

kelvin pang
Hi,
What does "Hadoop" mean?


2006/8/18, HUYLEBROECK Jeremy RD-ILAB-SSF <[hidden email]>:
>
> On my dev machine, I have dfs.replication set to 1 in
> conf/hadoop-site.xml.
>
> But I get these warning messages in hadoop.log:
>
> Fs.FSNamesystem - Replication requested of 2 is larger than cluster
> size(1). Using cluster size.
> Zero targets found, forbidden1.size=1 forbidden2.size()=0
>
> I don't know where the 2 is coming from, since hadoop-default.xml sets it
> to 3 and the Hadoop code also defaults to 3.
> Also, hadoop-site.xml is being read properly (the fs configuration is used
> correctly, for instance).
>
> Can I completely ignore these warnings, or do they mean that there is
> something bad going on?
>
> I use hadoop-0.4 with nutch-0.8.
>
>


--
kevin

Re: Hadoop replication warning

Andrzej Białecki-2
In reply to this post by HUYLEBROECK Jeremy RD-ILAB-SSF-2
HUYLEBROECK Jeremy RD-ILAB-SSF wrote:

> On my dev machine, I have dfs.replication set to 1 in
> conf/hadoop-site.xml.
>
> But I get these warning messages in hadoop.log:
>
> Fs.FSNamesystem - Replication requested of 2 is larger than cluster
> size(1). Using cluster size.
> Zero targets found, forbidden1.size=1 forbidden2.size()=0
>
> I don't know where the 2 is coming from, since hadoop-default.xml sets it
> to 3 and the Hadoop code also defaults to 3.
> Also, hadoop-site.xml is being read properly (the fs configuration is used
> correctly, for instance).
>
> Can I completely ignore these warnings, or do they mean that there is
> something bad going on?
>
> I use hadoop-0.4 with nutch-0.8.
>
>  

This could indicate a corrupted file (missing blocks). Please run
'hadoop fsck /' and check the output.
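
Something along these lines, assuming you run it from the installation
directory with the same conf/ the daemons use:

  bin/hadoop fsck /

The summary at the end includes the target and real replication factors and
an overall health status.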

--
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com



RE: Hadoop replication warning

HUYLEBROECK Jeremy RD-ILAB-SSF-2
In reply to this post by HUYLEBROECK Jeremy RD-ILAB-SSF-2
'hadoop fsck' says it's HEALTHY.
And, maybe weirder, it says:
Target Replication factor: 1
Real replication factor : 1.0

At least the value in the config file seems to be read properly.
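
One thing I plan to try next is the more verbose fsck output, to see
per-file details (a sketch, assuming this Hadoop build already supports
these options):

  bin/hadoop fsck / -files -blocks -locations

That should list each file with its blocks, which might show whether any
path is still asking for a replication of 2.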

I'll investigate more.
If you have more clues, let me know.


-----Original Message-----
From: Andrzej Bialecki [mailto:[hidden email]]
Sent: Friday, August 18, 2006 2:40 AM
To: [hidden email]
Subject: Re: Hadoop replication warning

HUYLEBROECK Jeremy RD-ILAB-SSF wrote:

> On my dev machine, I have dfs.replication set to 1 in
> conf/hadoop-site.xml.
>
> But I get these warning messages in hadoop.log:
>
> Fs.FSNamesystem - Replication requested of 2 is larger than cluster
> size(1). Using cluster size.
> Zero targets found, forbidden1.size=1 forbidden2.size()=0
>
> I don't know where the 2 is coming from, since hadoop-default.xml sets it
> to 3 and the Hadoop code also defaults to 3.
> Also, hadoop-site.xml is being read properly (the fs configuration is used
> correctly, for instance).
>
> Can I completely ignore these warnings, or do they mean that there is
> something bad going on?
>
> I use hadoop-0.4 with nutch-0.8.
>
>  

This could indicate a corrupted file (missing blocks). Please run
'hadoop fsck /' and check the output.

--
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com