Map/Reduce tracker

Map/Reduce tracker

alakshman
Hi All

Is there a way to find out on which nodes in my cluster the Map/Reduce jobs
are running after I submit my job? Also, is there any way to determine, given
a file, where the different blocks of that file are stored?

Thanks
A

Re: Map/Reduce tracker

Arun C Murthy-2
On Thu, Jul 19, 2007 at 08:57:42AM -0700, Phantom wrote:
>Hi All
>
>Is there a way to find out on which nodes in my cluster the Map/Reduce jobs
>are running after I submit my job?

Short answer: No.
Is there a specific reason you need this? Maybe we can try and help you given a more detailed description...

>Also, is there any way to determine, given
>a file, where the different blocks of that file are stored?
>
I think
http://lucene.apache.org/hadoop/api/org/apache/hadoop/fs/FilterFileSystem.html#getFileCacheHints(org.apache.hadoop.fs.Path,%20long,%20long)
is what you want...

hth,
Arun
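
For reference, a rough sketch of what that call looks like against the FileSystem API of that era (a sketch only: the class name BlockHosts is made up, and both getLength and getFileCacheHints were later superseded by getFileStatus and getFileBlockLocations in newer Hadoop releases):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BlockHosts {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();   // picks up hadoop-site.xml from the classpath
      FileSystem fs = FileSystem.get(conf);

      Path file = new Path(args[0]);              // the DFS file you want to inspect
      long len = fs.getLength(file);              // old-style length call

      // For each block overlapping the byte range [0, len),
      // the hosts holding a replica of that block.
      String[][] hints = fs.getFileCacheHints(file, 0, len);
      for (int i = 0; i < hints.length; i++) {
        StringBuffer hosts = new StringBuffer();
        for (int j = 0; j < hints[i].length; j++) {
          hosts.append(hints[i][j]).append(' ');
        }
        System.out.println("block " + i + ": " + hosts);
      }
    }
  }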

Re: Map/Reduce tracker

alakshman
I would like to understand how the map tasks are assigned. Intuitively it
would seem that the tasks would be assigned to the nodes that contain the
blocks needed for each map task, but that need not necessarily be true.
Figuring out where the blocks are placed would help me understand this a
little better.

A
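
For context, the JobTracker does try to schedule each map task on a tasktracker that holds a replica of the blocks in that task's input split, falling back to another node if no local slot is free, so locality is best-effort rather than guaranteed. The host hints it works from are exposed through InputSplit.getLocations(); below is a minimal sketch that prints them, written against the org.apache.hadoop.mapred API as it appears in later releases (helpers such as FileInputFormat.setInputPaths and the class name SplitHosts are assumptions, not 0.13-era names):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.InputSplit;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.TextInputFormat;

  public class SplitHosts {
    public static void main(String[] args) throws Exception {
      JobConf job = new JobConf(SplitHosts.class);
      FileInputFormat.setInputPaths(job, new Path(args[0]));

      TextInputFormat in = new TextInputFormat();
      in.configure(job);

      // Roughly one split per map task; getLocations() returns the hosts the
      // JobTracker will prefer when assigning that split to a tasktracker.
      InputSplit[] splits = in.getSplits(job, 1);
      for (int i = 0; i < splits.length; i++) {
        System.out.println("split " + i + " -> "
            + java.util.Arrays.asList(splits[i].getLocations()));
      }
    }
  }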

Re: Map/Reduce tracker

ojh06
Not sure if I'm missing something here, but can you not just point your
web browser at <ip address of your job tracker>:50030? Or does the
information given there not cover what you need?
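
For anyone who would rather script this than click through a browser, the JobTracker UI is plain HTTP, so it can be fetched directly; drilling down from a job's page to its tasks shows which tasktracker each attempt ran on. A rough sketch, with a made-up host name and the caveat that the front page (jobtracker.jsp here) may be named differently across releases:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URL;

  public class JobTrackerPage {
    public static void main(String[] args) throws Exception {
      // Hypothetical address -- substitute your own JobTracker host.
      URL url = new URL("http://jobtracker.example.com:50030/jobtracker.jsp");
      BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // raw HTML; the running-jobs table is in here
      }
      in.close();
    }
  }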
