Hadoop input split size vs block size

This article looks at the question of Hadoop input split size vs block size. The discussion should be a useful reference for anyone facing the same problem; interested readers can follow along below.

Problem description



I am going through the Hadoop definitive guide, where it clearly explains input splits.

1) Let's say a 64MB block is on node A and replicated on 2 other nodes (B, C), and the input split size for the MapReduce program is 64MB. Will this split just have the location for node A, or will it have locations for all three nodes A, B, C?

2) Since the data is local to all three nodes, how does the framework decide (pick) a particular node to run the map task on?

3) How is it handled if the input split size is greater or less than the block size?

Solution

  • The answer by @user1668782 is a great explanation for the question, and I'll try to give a graphical depiction of it.

  • Assume we have a 400MB file that consists of 4 records (e.g., a 400MB CSV file with 4 rows, 100MB each).

  • If the HDFS block size is configured as 128MB, then the 4 records will not be distributed evenly among the blocks. It will look like this.
  • Block 1 contains the entire first record and a 28MB chunk of the second record.
  • If a mapper is run on Block 1, it cannot process the data, since it won't have the entire second record.
  • This is the exact problem that input splits solve: input splits respect logical record boundaries. (The sketch below makes the block arithmetic concrete.)
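
To make the block arithmetic concrete, here is a minimal plain-Java sketch. The 128MB block size and 100MB record size are the assumptions of this example (not Hadoop defaults), and the class name is made up:

```java
public class RecordVsBlockLayout {
    static final long MB = 1024L * 1024L;
    static final long BLOCK_SIZE = 128 * MB;   // assumed HDFS block size (dfs.blocksize)
    static final long RECORD_SIZE = 100 * MB;  // assumed fixed record size from the example

    public static void main(String[] args) {
        for (int r = 0; r < 4; r++) {
            long start = r * RECORD_SIZE;             // offset of the record's first byte
            long lastByte = start + RECORD_SIZE - 1;  // offset of the record's last byte
            long firstBlock = blockOf(start);
            long lastBlock = blockOf(lastByte);
            System.out.printf("record %d: bytes [%dMB, %dMB) -> block %d%s%n",
                    r + 1, start / MB, (start + RECORD_SIZE) / MB,
                    firstBlock + 1,
                    firstBlock == lastBlock ? "" : " and block " + (lastBlock + 1));
        }
    }

    // The HDFS block holding byte `offset`: block j covers [j*BLOCK_SIZE, (j+1)*BLOCK_SIZE).
    static long blockOf(long offset) {
        return offset / BLOCK_SIZE;
    }
}
```

Running it shows record 2 occupying bytes [100MB, 200MB), straddling the block boundary at 128MB; that is exactly the 28MB chunk left in Block 1.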

  • Let's assume the input split size is 200MB. (One way this could be configured is sketched below.)
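
How a 200MB split size would be configured is not covered by the original answer, but with the stock FileInputFormat the effective split size is max(minSize, min(maxSize, blockSize)), so raising the minimum above the block size is one way to get there. A sketch, assuming the new MapReduce API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split-size-demo");

        // FileInputFormat computes: splitSize = max(minSize, min(maxSize, blockSize)).
        // With 128MB blocks, raising the minimum to 200MB yields 200MB splits.
        FileInputFormat.setMinInputSplitSize(job, 200L * 1024 * 1024);

        // The equivalent configuration property (e.g. via -D on the command line):
        //   mapreduce.input.fileinputformat.split.minsize=209715200
    }
}
```

Note that a split larger than a block trades locality for fewer map tasks: a 200MB split necessarily reads some of its bytes from a block on another node.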

  • Therefore, input split 1 should have both record 1 and record 2. Input split 2 will not start with record 2, since record 2 has already been assigned to input split 1; input split 2 will start with record 3. (The record-reader sketch below shows how this is enforced.)
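
The bookkeeping that enforces this lives in the record reader, not in the split itself. Below is a simplified, self-contained sketch of the convention Hadoop's LineRecordReader follows for text input; it uses plain java.io and tiny made-up records rather than the real classes:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;

public class SplitReaderSketch {

    // Read one split's worth of newline-delimited records.
    static void readSplit(RandomAccessFile file, long start, long length)
            throws IOException {
        long end = start + length;  // first byte past the split
        if (start == 0) {
            file.seek(0);
        } else {
            // Seek one byte back and discard a line. If `start` falls exactly on
            // a record boundary this skips the previous split's last record
            // (already consumed by that split's reader); if it falls mid-record
            // it skips the partial record. LineRecordReader uses the same trick.
            file.seek(start - 1);
            file.readLine();
        }
        // Emit whole records until we have consumed the first record boundary
        // at or past `end`; the final record is read in full even if its
        // remaining bytes live in the next split (i.e. another block).
        while (file.getFilePointer() < end) {
            String record = file.readLine();
            if (record == null) break;  // end of file
            System.out.println("record: " + record);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("records", ".txt");
        f.deleteOnExit();
        try (FileWriter w = new FileWriter(f)) {
            w.write("record1\nrecord2\nrecord3\nrecord4\n");  // 4 records, 8 bytes each
        }
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            System.out.println("-- split 1 --");
            readSplit(raf, 0, 12);   // the 12-byte boundary falls inside record 2
            System.out.println("-- split 2 --");
            readSplit(raf, 12, 20);  // starts mid-record, so it skips ahead to record 3
        }
    }
}
```

The key exchange: every reader except the first discards the record straddling its start offset, and every reader finishes the record straddling its end, so each record is processed exactly once.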

  • This is why an input split is only a logical chunk of data. It points to start and end locations within blocks. (See the FileSplit sketch below.)
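
In the MapReduce API this logical chunk is literally just a descriptor: a FileSplit carries a path, a start offset, a length, and the hosts storing the underlying blocks. A simplified sketch of how FileInputFormat-style split computation produces those descriptors (the file path and sizes are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class LogicalSplits {

    // Simplified from FileInputFormat.getSplits(): carve the file's byte range
    // into (start, length) descriptors. No file data is read or copied here.
    static List<FileSplit> computeSplits(Path file, long fileLength, long splitSize) {
        List<FileSplit> splits = new ArrayList<>();
        long remaining = fileLength;
        while (remaining > 0) {
            long start = fileLength - remaining;
            long length = Math.min(splitSize, remaining);
            // Hosts omitted; the real code asks the NameNode which datanodes
            // hold the blocks overlapping [start, start + length).
            splits.add(new FileSplit(file, start, length, new String[0]));
            remaining -= length;
        }
        return splits;
    }

    public static void main(String[] args) {
        final long MB = 1024L * 1024L;
        // The 400MB file and 200MB split size assumed in the answer above.
        for (FileSplit s : computeSplits(new Path("/data/input.csv"), 400 * MB, 200 * MB)) {
            System.out.println("split: start=" + s.getStart() / MB
                    + "MB length=" + s.getLength() / MB + "MB");
        }
    }
}
```

Nothing is read while splits are computed; only at task runtime does a record reader open the file and seek to its split's start offset.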

Hope this helps.

This concludes the article on Hadoop input split size vs block size. We hope the answer above is helpful, and we appreciate your continued support!
