Problem description
In MapReduce, each reduce task writes its output to a file named part-r-nnnnn, where nnnnn is the partition ID associated with the reduce task. Does MapReduce merge these files? If yes, how?
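The partition ID in the file name comes from the job's partitioner, which decides which reduce task receives each key. A minimal Python sketch of Hadoop's default HashPartitioner logic, `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks` (the `java_string_hash` helper is ours, reimplementing Java's `String.hashCode` for illustration):

```python
def java_string_hash(s: str) -> int:
    """Reimplementation of Java's String.hashCode (32-bit, signed)."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Interpret as a signed 32-bit integer, as Java would.
    return h - 0x100000000 if h >= 0x80000000 else h

def partition(key: str, num_reduce_tasks: int) -> int:
    """Mimics Hadoop's default HashPartitioner:
    (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks."""
    return (java_string_hash(key) & 0x7FFFFFFF) % num_reduce_tasks
```

A key that hashes to partition 3 in a 10-reducer job will therefore land in part-r-00003.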
Solution

Instead of merging the files yourself, you can delegate the entire merging of the reduce output files to Hadoop by calling:
hadoop fs -getmerge /output/dir/on/hdfs/ /desired/local/output/file.txt
Note: this concatenates the HDFS files on the local machine, so make sure you have enough local disk space before running it.