exception

exception

Anton Potekhin
What does an error of the following type mean?

java.rmi.RemoteException: java.io.IOException: Cannot obtain additional block for file /user/root/crawl/indexes/index/_0.prx


Re: exception

Doug Cutting
This is a Hadoop DFS error.  It could mean that you don't have any
datanodes running, or that all your datanodes are full.  Or, it could be
a bug in dfs.  You might try a recent nightly build of Hadoop to see if
it works any better.

Doug
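
A quick way to rule out the first two possibilities, assuming a build that ships the dfsadmin tool (command names may differ in nightlies of this vintage):

        # Prints live datanodes and used/remaining DFS capacity; zero
        # datanodes or no remaining space matches the first two diagnoses.
        bin/hadoop dfsadmin -report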

Anton Potehin wrote:

> What does an error of the following type mean?
>
> java.rmi.RemoteException: java.io.IOException: Cannot obtain additional block for file /user/root/crawl/indexes/index/_0.prx

RE: exception

Anton Potekhin
We updated Hadoop from the trunk branch, but now we get new errors:

On the tasktracker side:
<skipped>
java.io.IOException: timed out waiting for response
        at org.apache.hadoop.ipc.Client.call(Client.java:305)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
        at org.apache.hadoop.mapred.$Proxy0.pollForTaskWithClosedJob(Unknown Source)
        at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:310)
        at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:374)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:813)
060427 062708 Client connection to 10.0.0.10:9001 caught: java.lang.RuntimeException: java.lang.ClassNotFoundException:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:152)
        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:139)
        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:186)
        at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:60)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:170)
060427 062708 Client connection to 10.0.0.10:9001: closing


On the jobtracker side:
<skipped>
060427 061713 Server handler 3 on 9001 caught: java.lang.IllegalArgumentException: Argument is not an array
java.lang.IllegalArgumentException: Argument is not an array
        at java.lang.reflect.Array.getLength(Native Method)
        at org.apache.hadoop.io.ObjectWritable.writeObject(ObjectWritable.java:92)
        at org.apache.hadoop.io.ObjectWritable.write(ObjectWritable.java:64)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:250)
<skipped>

-----Original Message-----
From: Doug Cutting [mailto:[hidden email]]
Sent: Thursday, April 27, 2006 12:48 AM
To: [hidden email]
Subject: Re: exception

[snip: Doug's reply, quoted in full above]



Re: exception

Doug Cutting
[hidden email] wrote:
> We updated Hadoop from the trunk branch, but now we get new errors:

Oops.  Looks like I introduced a bug yesterday.  Let me fix it...

Sorry,

Doug

TRUNK IllegalArgumentException: Argument is not an array (WAS: Re: exception)

Michael Stack
I'm getting the same as Anton below trying to launch a new job with the
latest from TRUNK.

The logic in ObjectWritable#readObject seems a little off.  On the way in
we test for a null instance.  If null, we set it to NullWritable.

Next we test declaredClass to see if it's an array.  We then try to do an
Array.getLength on instance -- which we've set above to NullWritable.

It looks like we should test instance to see if it's NullWritable before we
do the Array.getLength (or do the instance null check later).

Hope above helps,
St.Ack
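
A minimal sketch in plain Java of the ordering problem described above (not Hadoop's actual code; the placeholder object stands in for NullWritable):

        import java.lang.reflect.Array;

        // Demonstrates the branch ordering issue: substituting a placeholder
        // for a null instance *before* the array branch hands a non-array to
        // Array.getLength(), which then throws
        // "java.lang.IllegalArgumentException: Argument is not an array".
        public class OrderingSketch {
            static final Object NULL_PLACEHOLDER = new Object(); // stands in for NullWritable

            static void writeBuggy(Object instance, Class<?> declaredClass) {
                if (instance == null) {              // null check runs first...
                    instance = NULL_PLACEHOLDER;
                }
                if (declaredClass.isArray()) {       // ...then the array branch
                    Array.getLength(instance);       // throws for the placeholder
                }
            }

            static void writeFixed(Object instance, Class<?> declaredClass) {
                if (instance == null || instance == NULL_PLACEHOLDER) {
                    return;                          // handle null before touching Array
                }
                if (declaredClass.isArray()) {
                    Array.getLength(instance);       // only ever sees real arrays
                }
            }

            public static void main(String[] args) {
                writeFixed(null, String[].class);    // fine
                writeBuggy(null, String[].class);    // reproduces the exception above
            }
        }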



[hidden email] wrote:

> We updated Hadoop from the trunk branch, but now we get new errors:
>
> [snip: tasktracker and jobtracker traces, quoted in full above]


Re: TRUNK IllegalArgumentException: Argument is not an array (WAS: Re: exception)

Doug Cutting
I just fixed this.  Sorry for the inconvenience!

Doug

Michael Stack wrote:

> I'm getting the same as Anton below trying to launch a new job with the
> latest from TRUNK.
>
> [snip: analysis and traces, quoted in full above]