io.file.buffer.size: how does Hadoop use it?


elton sky
I am a bit confused about how this attribute is used.

My understanding is that it's related to file reads and writes. And I can see that, in
LineReader.java, it's used as the default buffer size for reading lines; in
BlockReader.newBlockReader(), it's used as the internal buffer size of the
BufferedInputStream; and in the compression-related classes, it's used as the
default buffer size (a small sketch of the pattern I mean follows below). However,
when creating a file (write), bufferSize does not seem to be used at all.
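
For context, here is a minimal sketch of the read-side pattern I think I'm seeing.
The exact wiring (looking up the key via Configuration.getInt and wrapping the raw
stream in a BufferedInputStream) is my assumption about how the pieces fit together,
not code copied out of Hadoop:

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadSideSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // io.file.buffer.size falls back to 4096 if unset; this is the
        // value that seems to flow into LineReader and BlockReader.
        int bufferSize = conf.getInt("io.file.buffer.size", 4096);

        FileSystem fs = FileSystem.get(conf);
        // open(Path, int bufferSize) hands the buffer size down to the
        // underlying stream implementation.
        InputStream in = fs.open(new Path("/tmp/example.txt"), bufferSize);

        // Roughly what BlockReader.newBlockReader() appears to do: wrap
        // the raw stream in a BufferedInputStream of that size.
        InputStream buffered = new BufferedInputStream(in, bufferSize);
        buffered.close();
    }
}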

E.g. the constructor

DFSClient.DFSOutputStream(String src, int buffersize, Progressable progress,
                          LocatedBlock lastBlock, FileStatus stat, int bytesPerChecksum);

has a buffersize parameter, but it is never used anywhere in the constructor body.
In other words, it's not used for writing at all?
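
For what it's worth, here is the write-side call I would expect the setting to
matter for. The FileSystem.create(Path, boolean, int) overload does exist, but
whether the bufferSize it accepts actually has any effect inside DFSOutputStream
is exactly what I'm asking:

import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteSideSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        int bufferSize = conf.getInt("io.file.buffer.size", 4096);

        FileSystem fs = FileSystem.get(conf);
        // create(Path, boolean overwrite, int bufferSize) accepts the
        // buffer size, but the DFSOutputStream constructor it eventually
        // reaches takes the parameter and (apparently) never reads it.
        OutputStream out = fs.create(new Path("/tmp/example.out"),
                                     true, bufferSize);
        out.write("hello".getBytes());
        out.close();
    }
}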

Is this right?