Our current HDFS cluster has a replication factor of 1. However, to improve performance and reliability (against node failures), we would like to increase the replication factor of just the Hive intermediate files (hive.exec.scratchdir) to 5. Is this possible?
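
For reference, these intermediate files live under the scratch directory (assumed here to be the default /tmp/hive; check hive.exec.scratchdir in your hive-site.xml if it was changed). Their current replication factor appears in the second column of a recursive listing:

# /tmp/hive is an assumed default path for hive.exec.scratchdir
hadoop fs -ls -R /tmp/hive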

Regards,
Selva

Best answer

See whether -setrep helps you.

setrep

Usage:

hadoop fs -setrep [-R] [-w] <numReplicas> <path>

Changes the replication factor of a file. If the path is a directory, the command recursively changes the replication factor of all files under the directory tree rooted at that path.

Options:
The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.

The -R flag is accepted for backwards compatibility. It has no effect.

Examples:
hadoop fs -setrep -w 3 /user/hadoop/dir1

hadoop fs -setrep -R -w 100 /path/to/hive/warehouse

Reference: -setrep
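
Applied to this question, a minimal sketch, assuming the scratch directory is the default /tmp/hive (substitute the actual value of hive.exec.scratchdir from your configuration):

# Raise the replication factor of the existing Hive intermediate files to 5
# and wait for re-replication to finish (-w); /tmp/hive is an assumed path.
hadoop fs -setrep -w 5 /tmp/hive

Keep in mind that -setrep only affects files that already exist; files Hive writes afterwards are created with the client's dfs.replication value, so that setting may also need to be raised on the Hive client side to keep new scratch files at 5.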

Regarding "hadoop - How to change the HDFS replication factor only for HIVE", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/33292277/
