could only be replicated to 0 nodes, instead of 1


Anthony.Fan
Hi, All

I started using Hadoop a few days ago. I get the error message
" WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/count/count/temp1 could only be replicated to 0 nodes, instead of 1"
while trying to copy data files to DFS after Hadoop has been started.

I did all the settings according to the instructions in "Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)", and I don't know what is wrong. Also, during the process, no error messages are written to the log files.

Also, according to "http://localhost.localdomain:50070/dfshealth.jsp", I have one live node. In the browser, I can even see that the first data file has been created in DFS, but its size is 0.
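In case it helps, the node count the web UI reports can also be checked from the command line. This is a sketch assuming the standard Hadoop bin/ layout of that era; run it from the Hadoop installation directory:

```shell
# Ask the NameNode how many DataNodes it actually sees,
# and how much DFS capacity is available. A live DataNode
# count of 0 here would explain the replication error.
bin/hadoop dfsadmin -report
```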

Things I've tried:
1. Stop hadoop, re-format DFS and start hadoop again.
2. Change "localhost" to "127.0.0.1"

But neither of them worked.
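For reference, the commands I used for step 1 were roughly the following (assuming the standard bin/ scripts of a single-node setup; note that formatting erases everything in HDFS):

```shell
# Step 1 above: stop all daemons, re-format the NameNode, restart.
bin/stop-all.sh
bin/hadoop namenode -format   # answer Y when prompted; this wipes HDFS
bin/start-all.sh
```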

Could anyone help me or give me a hint?

Thanks.

Anthony
Re: could only be replicated to 0 nodes, instead of 1

Anthony.Fan
The full error message is:
09/07/02 16:28:09 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/count/count/temp1 retries left 1
09/07/02 16:28:12 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/count/count/temp1 could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1280)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

        at org.apache.hadoop.ipc.Client.call(Client.java:697)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2814)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2696)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)