HDFS Interview questions


1-Why is Data Block size set to 128 MB in Hadoop?
The block size is set to 128 MB for the following reasons:
To reduce disk seeks (I/O). The larger the block size, the fewer blocks a file has, and therefore the fewer disk seeks. At the same time, each block can still be transferred in a reasonable amount of time, and different blocks can be transferred in parallel.
HDFS holds huge data sets, i.e. terabytes and petabytes of data. If we took a 4 KB block size for HDFS, just like the Linux file system, we would have far too many blocks and therefore far too much metadata, and managing this huge number of blocks and metadata would create huge overhead on the NameNode, which is something we don't want. For example, a 1 TB file needs about 268 million block entries with 4 KB blocks, but only 8,192 entries with 128 MB blocks. So the block size is set to 128 MB.
On the other hand, the block size can't be too large either, because then the system would wait a very long time for the last unit of data processing to finish its work.
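If you want to confirm the default block size your cluster is actually configured with, you can read the dfs.blocksize property directly. A minimal check, assuming a standard installation where the value prints in bytes (134217728 bytes = 128 MB):
hdfs getconf -confKey dfs.blocksize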

 

2-How can one copy a file into HDFS with a block size different from the existing block size configuration?
One can copy a file into HDFS with a different block size by using the following option:
-Ddfs.blocksize=block_size, where block_size is in bytes.
Consider an example to explain it in detail:
Suppose you want to copy a file called test.txt, say of size 128 MB, into HDFS, and you want the block size for this file to be 32 MB (33554432 bytes) in place of the default (128 MB). You can issue the following command:
hadoop fs -Ddfs.blocksize=33554432 -copyFromLocal /home/dataflair/test.txt /sample_hdfs
Now, you can check the HDFS block size associated with this file by:
hadoop fs -stat %o /sample_hdfs/test.txt
You can also check it by browsing the HDFS directory in the NameNode web UI.
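If you also want to see the actual blocks that were allocated, and not just the block size, the fsck tool can list them. A minimal sketch, assuming the same example path as above:
hdfs fsck /sample_hdfs/test.txt -files -blocks
Since the file was written with a 32 MB block size, the 128 MB test.txt should show up as 4 blocks in the report.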

For more questions, visit the link: http://data-flair.training/blogs/hdfs-interview-questions-and-answers/