Is there a way to fine-tune Hadoop configuration parameters without having to run tests for every possible combination? I am currently working on an 8-node cluster and I want to optimize the performance of MapReduce tasks as well as Spark performance (running on top of HDFS).

Solution

The short answer is NO. You need to play around and run smoke tests to determine the optimal settings for your cluster. So I would start by checking out these links:

https://community.hortonworks.com/articles/103176/hdfs-settings-for-better-hadoop-performance.html
http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-1/
http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-2/

Some topics discussed there that will affect MapReduce jobs:

Configure the HDFS block size for optimal performance
Avoid file sizes that are smaller than the block size
Tune the DataNode JVM for optimal performance
Enable HDFS short-circuit reads
Avoid reads or writes from stale DataNodes

(A sketch of hdfs-site.xml settings for several of these items is given at the end of this answer.)

To give you an idea of how a 4-node, 32-core, 128GB-RAM-per-node cluster is set up in YARN/TEZ (from "Hadoop multinode cluster too slow. How do I increase speed of data processing?"):

For Tez: divide RAM/CORES = max TEZ container size. So in my case: 128/32 = 4GB. (A tez-site.xml sketch with this number follows below.)

TEZ: (configuration screenshot in the original answer)
YARN: (configuration screenshot in the original answer)

I like to run the max RAM I can spare per node with YARN; mine is a little higher than the recommendations, but the recommended values cause crashes in TEZ/MR jobs, so 76GB works better in my case. You need to play with all these values! (A yarn-site.xml sketch with these numbers follows below.)
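
To make the container sizing concrete, here is a minimal tez-site.xml sketch. The property names are standard Tez settings, but that these exact properties were what the original answer's screenshot showed is my assumption; the 4096MB value simply encodes the RAM/CORES = 128/32 = 4GB calculation above.

<!-- tez-site.xml (sketch): Tez container sizing from RAM/CORES = 128/32 = 4GB -->
<configuration>
  <property>
    <!-- Memory per Tez task container; 4096 MB matches the 4GB calculated above -->
    <name>tez.task.resource.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <!-- Memory for the Tez ApplicationMaster; one container's worth (assumed here) -->
    <name>tez.am.resource.memory.mb</name>
    <value>4096</value>
  </property>
</configuration>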
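
And a matching yarn-site.xml sketch for the 76GB-per-node figure. These are standard YARN properties, but the concrete values beyond the 76GB total (the minimum/maximum allocation sizes) are illustrative assumptions, not the original answer's exact settings.

<!-- yarn-site.xml (sketch): ~76GB of each 128GB node handed to YARN -->
<configuration>
  <property>
    <!-- Total memory YARN may allocate per NodeManager: 76 * 1024 = 77824 MB -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>77824</value>
  </property>
  <property>
    <!-- Smallest container granted; aligned with the 4GB Tez container (assumption) -->
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <!-- Largest single container; capped at the node's YARN memory (assumption) -->
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>77824</value>
  </property>
</configuration>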
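
Finally, an hdfs-site.xml sketch covering the tuning topics listed above (block size, short-circuit reads, stale DataNodes). The property names are standard HDFS settings; the values, including the domain socket path, are common examples you should adapt to your own cluster rather than settings from the original answer.

<!-- hdfs-site.xml (sketch): settings matching the tuning topics listed above -->
<configuration>
  <property>
    <!-- HDFS block size; 128MB (134217728 bytes) is a common starting point -->
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <property>
    <!-- Short-circuit reads let a client read local blocks directly, bypassing the DataNode -->
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <!-- Unix domain socket required for short-circuit reads (example path) -->
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
  </property>
  <property>
    <!-- Do not read from DataNodes whose heartbeats have gone stale -->
    <name>dfs.namenode.avoid.read.stale.datanode</name>
    <value>true</value>
  </property>
  <property>
    <!-- Likewise avoid writing to stale DataNodes -->
    <name>dfs.namenode.avoid.write.stale.datanode</name>
    <value>true</value>
  </property>
</configuration>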